How does OpenGL find the implementation to use on Windows?

I have a Java program using OpenGL via JOGL, and there are some bugs that only appear on Windows that I'd like to debug. For this purpose, I tried setting up a spare computer with Windows, but encountered a strange problem when I went to debug my program:
When I run the program "normally" via Java Web Start, it works perfectly, but when I compile the program and try to run it either via the command-line java launcher or via NetBeans (which I presume does the same thing), it appears to use a different and very primitive OpenGL implementation that doesn't support programmable shading or anything.
When researching the problem, I've come to understand that OpenGL programs running on Windows load opengl32.dll, which is apparently a common library that ships with Windows (correct me if I'm wrong) and which in turn loads the "real" OpenGL implementation and forwards OpenGL function calls to it. (It also appears to be somewhat of a misnomer, as it is in fact loaded in a 64-bit process at a base address clearly above 2^32.)
Using Process Explorer, I see that, when I run the program under Java Web Start (where it works), it loads the library ig4icd64.dll, which I assume is the actual OpenGL implementation library for the Intel GPU driver; whereas when trying to run the program via java.exe, opengl32.dll is loaded, but ig4icd64.dll is never loaded, which appears to confirm my suspicion that it's using a different OpenGL implementation.
So this leads to the main question, then: How does opengl32.dll select the OpenGL implementation to use, and how can I influence this choice to ensure the correct implementation is loaded? What means are available to debug this? (And what is different between these two contexts that causes it to choose different implementations? In both cases, 64-bit Java is used, so there should be no confusion between 32-bit and 64-bit implementations.)
Update: I found this page at Microsoft's site that claims that the OpenGL ICD is found by way of the OpenGLDriverName value in the HKLM\System\CurrentControlSet\Control\Class\{Adapter GUID}\0000 registry key. That value does correctly contain ig4icd64.dll, however, and perhaps more strangely, using Process Monitor to monitor the syscalls (if that's the correct Windows terminology) of the Java process reveals that it never attempts to access that key. I can't say I know if that means that the article is incorrect, or if I'm using Process Monitor incorrectly, or if it's something else.

When researching the problem, I've come to understand that OpenGL programs running on Windows load opengl32.dll, which is apparently a common library that ships with Windows (correct me if I'm wrong) and which in turn loads the "real" OpenGL implementation and forwards OpenGL function calls to it.
Yes, this is exactly how it works. opengl32.dll acts as a conduit between the Installable Client Driver (ICD) and the programs using OpenGL.
So this leads to the main question, then: How does opengl32.dll select the OpenGL implementation to use, and how can I influence this choice to ensure the correct implementation is loaded? What means are available to debug this?
It chooses based on the window class flags (not a Java class, but a set of settings for a window in the Windows API; see https://msdn.microsoft.com/en-us/library/windows/desktop/ms633577(v=vs.85).aspx for details), the window style flags, the pixel format set for the window, the position of the window (which determines which screen and graphics device it's on), and the context creation flags.
For example, if you were to start it as a service, there'd be no graphics device to create a window on at all. If you were to start it in a remote desktop session, it would run on a headless, software-rasterizer implementation.
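To make that concrete, here is a minimal sketch in plain C of the Win32 calls that sit underneath what JOGL/AWT do for you (the window class and pixel format below are illustrative, not what JOGL actually requests; build against user32.lib, gdi32.lib and opengl32.lib). If the pixel format that gets chosen can only be served by Microsoft's software fallback, GL_RENDERER comes back as "GDI Generic" instead of the vendor's ICD.

#include <windows.h>
#include <GL/gl.h>
#include <stdio.h>

int main(void)
{
    /* A throwaway window; the predefined STATIC class is enough for a probe. */
    HWND hwnd = CreateWindowA("STATIC", "gl-probe", WS_OVERLAPPEDWINDOW,
                              0, 0, 64, 64, NULL, NULL, NULL, NULL);
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int fmt = ChoosePixelFormat(hdc, &pfd);   /* driver picks the closest match */
    SetPixelFormat(hdc, fmt, &pfd);

    HGLRC ctx = wglCreateContext(hdc);        /* this is where the ICD comes into play */
    wglMakeCurrent(hdc, ctx);

    /* "GDI Generic" = Microsoft's OpenGL 1.1 fallback; anything else = the vendor ICD. */
    printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(ctx);
    ReleaseDC(hwnd, hdc);
    DestroyWindow(hwnd);
    return 0;
}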
I don't know the particular details of how the command-line java launcher differs from Web Start. But IIRC you use javaw (note the extra w) for GUI programs.
(It also appears to be somewhat of a misnomer, as it is in fact loaded in a 64-bit process at a base address clearly above 2^32.)
It's not just opengl32.dll: all Windows system DLLs are named …32 even in a 64-bit environment, and they're even located in \Windows\System32, to add to the confusion. There's a very simple reason: source-code-level backwards compatibility when compiling for 64 bits. If all the library names had been changed to …64, then to compile programs for a 64-bit environment, every string literal and library reference would have had to be renamed to …64.
If it makes you feel better about the naming, think of the …32 as a version designator, not an architecture designator: the Win32 API was developed in parallel for Windows 9x and Windows NT 3.x, so just in your mind let that …32 stand for "API version created for Windows NT 3.2".

Related

Can you make a Windows desktop app in Rust and winapi?

I've got a Windows application with a GUI written in Rust and winapi. Despite its GUI, it behaves like a console application. When the exe file is started, a Command Prompt window pops up, and the application is run from it. This is not what I want; a main window should open instead, as in all real desktop apps. How can I achieve this goal in Rust with winapi?
I've investigated some options. You can develop Windows desktop applications using Tauri or gtk-rs, but both of these techniques have drawbacks when used for Windows apps. More options may be found here. I've also tried the windows-rs samples available on the internet, but they're all console apps with a graphical user interface, which isn't what I'm looking for.
I also note that C++ desktop applications use the function int APIENTRY wWinMain(...) as the entry point, while console applications use int main(...), and wWinMain doesn't seem to be available in Rust's winapi.
Whether the system allocates a console for a newly created process is controlled by the Subsystem field in the Windows-specific optional PE header. The field is populated through the linker's /SUBSYSTEM command-line option. The only arguments relevant for desktop applications are CONSOLE and WINDOWS: the former instructs the system to allocate a console on launch, the latter does not.
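As a language-independent illustration (shown here in C only because the PE mechanics are the same no matter what produced the binary), you can read the Subsystem field of your own executable at runtime; dumpbin /headers shows the same value. This is just a sketch, not something the windows crate requires you to do.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Walk from the DOS header to the NT headers of the running image. */
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)GetModuleHandleW(NULL);
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)((BYTE *)dos + dos->e_lfanew);
    printf("Subsystem: %u\n", nt->OptionalHeader.Subsystem);  /* 2 = GUI, 3 = console */
    return 0;
}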
You can instruct the linker to target the WINDOWS subsystem from Rust code by placing the per-module
#![windows_subsystem = "windows"]
attribute (see windows-subsystem) inside the main module of your application.
You'll find an example of this in the core_app sample of the windows crate.
This is the most convenient way to target the WINDOWS subsystem. You can also explicitly pass the linker flag along, e.g. by placing the following override into .cargo/config.toml:
[build]
rustflags = [
"-C", "link-arg=/SUBSYSTEM:WINDOWS",
]
This may or may not work, depending on the linker you happen to be using. Since the linker isn't part of the Rust toolchain, making sure that this works and has the intended effect is on you.
A note on the entry point's function name: It is irrelevant as far as the OS loader is concerned. It never even makes it into the final executable image anyway. The PE image only stores the (image-base-relative) AddressOfEntryPoint, and that symbol could have been named anything.
The concrete name is only relevant to the build tools involved in generating the respective linker input.
More info here: WinMain is just the conventional name for the Win32 process entry point. The underlying principles apply to Rust just the same, particularly the aspect that the user-defined entry point (fn main()) isn't actually the executable's entry point.
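To illustrate that point with a sketch in C (assuming MSVC's link.exe and a 64-bit build; the name MyStart is made up): the /ENTRY option decides which symbol becomes AddressOfEntryPoint, and the conventional names only matter to the C runtime startup code that the linker would otherwise put in front of your code. Bypassing the default entry point skips CRT initialization, so only raw Win32 calls are safe inside it.

/* Build with: cl demo.c user32.lib /link /SUBSYSTEM:WINDOWS /ENTRY:MyStart */
#include <windows.h>

void MyStart(void)
{
    MessageBoxW(NULL, L"Entry point names are a linker convention.", L"demo", MB_OK);
    ExitProcess(0);   /* no CRT was initialized, so exit explicitly */
}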

missing ADO connection functionality when running old vb6 app on win 2016

THIS HAS BEEN EDITED, SEE BOTTOM. I CHANGED THE TITLE TO BETTER REFLECT THE PROBLEM.
I have an old VB6 application that I put on a Windows 2016 server, and I've been having issues with dependency files. I ran Process Monitor and started putting the DLL files in the locations where it is looking; most of the issues have cleared up.
I'm getting one that I can't find on the old Windows 2000 box or anywhere else: wow64log.dll
Where can I get this file? I attached pics of Process Monitor and the list of dependencies that the app requires. Any direction would be appreciated. The third pic is the actual error when trying to open the app. Edit: added the Dependency Walker screenshot.
EDIT:
So I have narrowed down the issue, and it boils down to an ADO connection. I can't seem to connect on Windows Server 2016 using ADO. I suspect it has something to do with the connection string, but what baffles me is why this works on a Windows 10 1803 box and not on Windows Server 2016 1607.
This is basically my issue: https://social.msdn.microsoft.com/Forums/SECURITY/en-US/f1eee40b-6ab2-445f-a361-ae965439273a/run-time-error-214746725980004005-for-using-adodbconnection?forum=isvvba
I suspect that this is not an actual error in the runtime of your program. If you are only looking at Procmon, be aware that it shows a lot of activity, and not all the "errors" there are really relevant. For instance, it will often show Windows searching a long list of paths, each failing in turn, before the actual location of a DLL is found.
In this case, it seems most likely that a missing wow64log.dll is harmless and, apparently, totally normal.
"WoW64" is the Windows subsystem which runs 32 bit programs inside the 64 bit operating system. ("WoW" stands for "Windows-on-Windows".)
According to the reference WoW64 Internals describing how this subsystem is initialized:
wow64!ProcessInit
...
It … tries to load the wow64log.dll from the constructed system directory. Note that this DLL is never present in any released Windows installation (it's probably used internally by Microsoft for debugging of the WoW64 subsystem). Therefore, load of this DLL will normally fail. This isn't problem, though, because no critical functionality of the WoW64 subsystem depends on it.
Although that article is talking about the ARM64 architecture (which, AFAIK, is not what most PCs would be using), it sounds like much of the WoW64 subsystem works the same way on ordinary x64 PCs.

Where is vendor's OpenGL driver located in Windows?

I'm going to try to experiment with C# and the modern OpenGL driver in Windows 10, and I'm trying to find that driver.
As I understand it, the standard driver Opengl32.dll, which is located at %systemroot%\system32, is an old one and seems to come from Microsoft; am I right?
I came to this conclusion after using the following command:
dumpbin opengl32.dll /exports
And found the function:
11 A 00090330 glBegin
As I remember, this function, like glLoadIdentity, glMultMatrix, glTranslate and glRotate, is deprecated and NOT included in core OpenGL 3.2+, because you have to do the matrix math on your own and use shaders.
OK, I began to search in the NVIDIA directory (the vendor of my video card is NVIDIA), C:\Program Files\NVIDIA Corporation, but found only OpenCL drivers in C:\Program Files\NVIDIA Corporation\OpenCL:
OpenCL.dll
OpenCL64.dll
Either of them can be dumped without problems via: dumpbin /exports
But I can't find the actual OpenGL driver here. Maybe it has some specific name like nvdisps.dll or something else?
PS (in case you ask):
I know that it's more common to use C++ for this stuff
I don't want to use ready-made libraries
I want to P/Invoke and just try to do it with C#
The opengl32.dll acts as a "conduit" toward the actual OpenGL driver, the so-called "ICD" (Installable Client Driver); it also contains OpenGL-1.1 fallback code, since OpenGL has been part of the Win32 API application binary interface contract (i.e. programs running on Win95b or WinNT 4 or later can expect a working OpenGL-1.1 implementation).
The vendor ICD registers itself in the Windows registry (for details see https://msdn.microsoft.com/en-us/library/windows/hardware/ff568203%28v=vs.85%29.aspx), and opengl32.dll loads the appropriate ICD from there. The name of the ICD is not fixed. You can easily find the ICD in use by passing an invalid pointer to an OpenGL function that expects a buffer; the access violation will happen inside the ICD's code.
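A (hedged) alternative that avoids provoking an access violation: once a context has been created and made current, simply enumerate the modules loaded into your own process and look for the vendor's DLL (ig4icd64.dll for the Intel driver mentioned above, or whatever your vendor ships). A sketch in C, linked against psapi.lib:

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

/* Call this after wglCreateContext/wglMakeCurrent, once the ICD has been loaded. */
void list_loaded_modules(void)
{
    HMODULE mods[1024];
    DWORD needed = 0;
    if (!EnumProcessModules(GetCurrentProcess(), mods, sizeof(mods), &needed))
        return;
    for (DWORD i = 0; i < needed / sizeof(HMODULE); ++i) {
        char path[MAX_PATH];
        if (GetModuleFileNameA(mods[i], path, sizeof(path)))
            printf("%s\n", path);   /* the ICD will appear in this list */
    }
}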

Is DLL loaded in kernel mode or user mode?

I was asked such a question in an interview:
In Windows, suppose there is an exe which depends on some DLLs. When you start the exe, the dependent DLLs will be loaded. Are these DLLs loaded in kernel mode or user mode?
I am not quite sure about the question, not to mention the answer; could you help explain?
Thanks.
I'm not an expert on how Windows works internally, but as far as I know the correct answer is user mode, simply because only processes belonging to the operating system itself are admitted into kernel space: http://en.wikibooks.org/wiki/Windows_Programming/User_Mode_vs_Kernel_Mode
Basically, if it's not an OS process, it's going to be allocated in user space.
The question is very imprecise/ambiguous. "In Windows" suggests something but isn't clear what. Likely the interviewer was referring to the Win32 subsystem - i.e. the part of Windows that you usually get to see as an end-user. The last part of the question is even more ambiguous.
Now, while process and section objects (referred to in MSDN as MMF, i.e. memory-mapped files; loaded PE images such as .exe, .dll and .sys) are indeed kernel objects and require some assistance from the underlying executive (and memory manager etc.), the respective code in the DLL (including that in DllMain) behaves exactly the same as any other user-mode code when called from a user-mode process. That is, each thread running code from the DLL will eventually transition to kernel mode to make use of OS services (opening files, loading PE files, creating events etc.) or do some of its work in user mode whenever that is sufficient.
Perhaps the interviewer was even interested in the memory ranges that are sometimes referred to as "kernel space" and "user space", traditionally split at the 2 GB boundary for 32-bit. And yes, DLLs usually end up below the 2 GB boundary, i.e. in "user space", while other shared memory (memory-mapped files, MMF) usually ends up above that boundary.
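A quick, purely illustrative way to see where modules actually land in your address space is to print their base addresses; in a default 32-bit process they come out below the 2 GB mark:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* GetModuleHandle returns the base address the image was mapped at. */
    printf("exe base:          %p\n", (void *)GetModuleHandleW(NULL));
    printf("kernel32.dll base: %p\n", (void *)GetModuleHandleW(L"kernel32.dll"));
    return 0;
}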
It is even possible that the interviewer fell victim to a common misunderstanding about DLLs. The DLL itself is merely a dormant piece of memory, it isn't running anything on its own ever (and yes, this is also true for DllMain). Sure, the loader will take care of all kinds of things such as relocations, but in the end nothing will run without being called explicitly or implicitly (in the context of some thread of the process loading the DLL). So for all practical purposes the question would require you to ask back.
Define "in Windows".
Also "dlls loaded in kernel mode or user mode", does this refer to the code doing the loading or to the end result (i.e. where the code runs or in what memory range it gets loaded)? Parts of that code run in user mode, others in kernel mode.
I wonder whether the interviewer has a clear idea of the concepts s/he is asking about.
Let me add some more information. It seems from the comments on the other answer that people have the same misconception that exists about DLLs also about drivers. Drivers are much closer to the idea of DLLs than to that of EXEs (or ultimately "processes"). The thing is that a driver doesn't do anything on its own most of the time (though it can create system threads to change that). Drivers are not processes and they do not create processes.
The answer is quite obviously user mode for anybody who does any kind of significant application development for Windows. Let me explain two things.
DLL
A dynamic-link library is closely related to a regular old static link library, or .lib. When your application uses a .lib, the function definitions are pasted in at link time, just after compilation. You typically use a .lib to package APIs and to modify functions without having to rebuild the whole project: just paste a new .lib with the same name over the old one, and as long as the interface (function names and parameters) hasn't changed, it still works. Great modularity.
A .dll does essentially the same thing, except that it requires no re-linking or recompilation. You can think of a .dll as a .lib that gets compiled into its own binary, just like the applications that use it. Simply drop in a new .dll that shares the name and function signatures, and it all just works. You can update your application simply by replacing .dlls. This is why most Windows software consists of many .dlls and a few .exes.
A .dll can be used in two ways:
Implicit linking
To link this way, if you had a DLL called userapplication.dll, you would also have a userapplication.lib (an import library) which lists all the entry points in the DLL. You simply link against that import library and then put the .dll in the working directory.
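A sketch of implicit linking in C, reusing the hypothetical names from this answer (userapplication.dll / userapplication.lib / FunctionInUserApplicationDll): the import is resolved by the loader before main() ever runs, and the process fails to start if the DLL is missing.

/* Link against userapplication.lib; the loader maps userapplication.dll at
   process start and patches this import automatically. */
__declspec(dllimport) int FunctionInUserApplicationDll(int arg);

int main(void)
{
    return FunctionInUserApplicationDll(42);
}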
Explicit linking
Alternatively, you can load the .dll programmatically by first calling LoadLibrary("userapplication.dll"), which returns a handle to the .dll, and then GetProcAddress(handle, "FunctionInUserApplicationDll"), which returns a function pointer you can use. This way your application can check things before attempting to use the DLL. C# is a little different, but easier.
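And a sketch of explicit linking in C with the same hypothetical names; here the application can detect a missing DLL or export and degrade gracefully instead of failing to start.

#include <windows.h>
#include <stdio.h>

typedef int (*FnInUserAppDll)(int);

int main(void)
{
    HMODULE dll = LoadLibraryA("userapplication.dll");
    if (dll == NULL) {
        printf("userapplication.dll not found, using a fallback\n");
        return 1;
    }
    FnInUserAppDll fn =
        (FnInUserAppDll)GetProcAddress(dll, "FunctionInUserApplicationDll");
    int result = (fn != NULL) ? fn(42) : -1;   /* the export might be missing too */
    FreeLibrary(dll);
    return result;
}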
USER/KERNEL MODES
Windows has two major modes of execution: user mode and kernel mode (the kernel is further divided into system space and session space). In user mode, physical memory addresses are opaque: user mode works with virtual memory, which is mapped onto real memory. User-mode drivers are, incidentally, also .dlls. A user-mode application typically gets around 4 GB of virtual address space to work with. Two different applications cannot meaningfully use each other's addresses, because those addresses are only valid within the context of that application's process. There is no way for a user-mode application to learn its physical memory addresses without falling back on a kernel-mode driver. This is basically everything you're used to programming (unless you develop drivers).
Kernel mode is protected from user-mode applications. Most hardware drivers run in kernel mode, and typically all Windows APIs are split into two categories, user and kernel. Kernel-mode drivers use kernel-mode APIs, not user-mode APIs, and hence don't use .dlls (you can't even print to a console, because that is a user-mode API set). Instead they use .sys files, which are drivers and work in essentially the same way as DLLs do in user mode. A .sys is a PE image, so basically an .exe, just as a .dll is like an .exe without a main() entry point.
So from the asker's perspective you have two groups:
[kernel/.sys] and [user/.dll or .exe]
There really aren't any .exes in the kernel, because the operating system does everything there, not user programs. When the system or another kernel component starts something, it does so by calling the driver's DriverEntry() routine, so I guess that is the closest thing to main().
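For comparison, a minimal kernel-mode sketch (C, built with the WDK; purely illustrative): DriverEntry runs once at load time, and afterwards the driver, much like a DLL, just sits there until one of its registered callbacks is invoked.

#include <ntddk.h>

VOID SampleUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    DbgPrint("sample: unloading\n");
}

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DbgPrint("sample: loaded\n");
    DriverObject->DriverUnload = SampleUnload;   /* called when the driver is stopped */
    return STATUS_SUCCESS;
}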
So in this sense the question is quite simple.

Best approach for debugging a Win 16-bit application?

I must reverse-engineer a legacy Windows (16-bit, NE executable) application that controls an old DAQ that I must somehow interface with upgraded hardware. I've been able to disassemble the executable using W32Dasm (and WindowsCodeBack as well; the only two of the many I've tried that have worked), but the resulting asm file contains too many lines. I'd like to use a debugger and set some breakpoints to narrow down the work. Could you advise on the right approach to debugging a Win16 app in 32-bit times? A VM running Windows 98, for example? Which Win16 debugger could I use?
Many thanks
IDA can disassemble Win16 programs as well (though not the free version), and it's much more convenient than plain dead listing.
As for debuggers, I would try to find the Win16 Turbo Debugger (TDW.EXE). There's also OpenWatcom, which even supports remote debugging (so you can run the program in a VM and the debugger UI on your desktop).

Resources