WinMain entry point - VS Linker vs Windows API - windows

I had been setting my application (MASM assembly language program) entry point via Visual Studio configurations, in the Project properties as:
Linker\System\SubSystem: Windows (/SUBSYSTEM:WINDOWS)
Linker\Advanced\Entry Point: WinMain
And my main proc is called WinMain (matching the above setting). It is a basic application that makes simple Windows API calls, e.g. MessageBoxA... and it works.
Now that I'm building a windowed application (in assembly), I read somewhere that I need to use the WinMain Windows API function as the entry point.
I'm now confused! Which technique do I use to set the entry point of my application (exe)? The Windows API call 'WinMain' or the Visual Studio linker entry-point setting? And what is the difference, i.e. C++ runtime vs. OS?

If you are using the C runtime library (which is usually the case when programming in C) then you must not specify the linker entry point yourself. If you do, the runtime library will not be properly initialized, and any runtime library calls (including those inserted by the compiler) may fail.
Instead, your main function should correspond to the relevant standard: WinMain() for a GUI application, or main() for a console application.
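For reference, this is the conventional C signature the runtime expects for a GUI program's main function (a minimal sketch, not specific to the assembly project above):

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    MessageBoxA(NULL, "Hello from WinMain", "Demo", MB_OK);
    return 0;   /* the C runtime turns this return into an ExitProcess call */
}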
In an assembly language program that is not linked to the C runtime library, you should specify an entry point of your choosing.
The signature of the native entry point is
DWORD CALLBACK RawEntryPoint(void);
Important:
Returning from the raw entry point implicitly calls ExitThread (see this answer) which is not usually the right thing to do, because if the Windows API has created any threads that you don't know about, the process won't exit until they do. Note that the Windows API documentation does not always indicate when a particular API function may cause a thread to be created.
Instead, you should explicitly call ExitProcess. This is what the C runtime library does when you return from WinMain() or main().
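A minimal C sketch of a CRT-free program built this way (the name RawEntryPoint is arbitrary; you would pass it to the linker via /ENTRY:RawEntryPoint, and the same idea applies to a MASM proc):

#include <windows.h>

/* Raw entry point: no CRT initialization has run, so avoid CRT functions here. */
DWORD CALLBACK RawEntryPoint(void)
{
    MessageBoxA(NULL, "Hello from a raw entry point", "Demo", MB_OK);

    /* Do not just return: explicitly terminate the whole process. */
    ExitProcess(0);
}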

Related

Can you make a Windows desktop app in Rust and winapi?

I've got a Windows application with a GUI written in Rust and winapi. Despite its GUI, it behaves like a console application. When the exe file is started, a Command Prompt window pops up, and the application is run from it. This is not what I want; a main window should open instead, as in all real desktop apps. How can I achieve this goal in Rust with winapi?
I've investigated some options. You can develop Windows desktop applications using Tauri or gtk-rs, but both of these techniques have drawbacks when used for Windows apps. More options may be found here. I've also tried the windows-rs samples available on the internet, but they're all console apps with a graphical user interface, which isn't what I'm looking for.
I also note that C++ desktop applications use the function int APIENTRY wWinMain(...) as the entry point while console applications use int main(...), and wWinMain doesn't seem to be available in Rust winapi.
Whether the system allocates a console for a newly created process is controlled by the Subsystem field in the Windows-specific optional PE header. The field is populated through the linker's /SUBSYSTEM command line option. The only relevant arguments for desktop applications are CONSOLE and WINDOWS. The former instructs the system to allocate a console on launch, whereas the latter does not.
You can instruct the linker to target the WINDOWS subsystem from Rust code by placing the per-module
#![windows_subsystem = "windows"]
attribute (see windows-subsystem) inside the main module of your application.
You'll find an example of this in the core_app sample of the windows crate.
This is the most convenient way to target the WINDOWS subsystem. You can also explicitly pass the linker flag along, e.g. by placing the following override into .cargo/config.toml:
[build]
rustflags = [
    "-C", "link-arg=/SUBSYSTEM:WINDOWS",
]
This may or may not work, depending on the linker you happen to be using. Since the linker isn't part of the Rust toolchain, making sure that this works and has the intended effect is on you.
A note on the entry point's function name: It is irrelevant as far as the OS loader is concerned. It never even makes it into the final executable image anyway. The PE image only stores the (image-base-relative) AddressOfEntryPoint, and that symbol could have been named anything.
The concrete name is only relevant to the build tools involved in generating the respective linker input.
More info here: WinMain is just the conventional name for the Win32 process entry point. The underlying principles apply to Rust just the same, particularly the aspect that the user-defined entry point (fn main()) isn't actually the executable's entry point.
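To illustrate that point (in C, since it is language-agnostic; this is a sketch of mine, not part of any toolchain), the entry point can be read back from the image's own headers, and no symbol name is stored alongside it:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The module handle of the EXE is its load address (image base). */
    BYTE *base = (BYTE *)GetModuleHandle(NULL);

    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);

    printf("AddressOfEntryPoint (RVA): 0x%08lx\n",
           (unsigned long)nt->OptionalHeader.AddressOfEntryPoint);
    return 0;
}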

How does OpenGL find the implementation to use on Windows?

I have a Java program using OpenGL via JOGL, and there are some bugs that only appear on Windows that I'd like to debug. For this purpose, I tried setting up a spare computer with Windows, but encountered a strange problem when I went to debug my program:
When I run the program "normally" via Java Web Start, it works perfectly normally, but when I compile the program and try to run it either via the command-line java launcher or via NetBeans (which I presume does the same thing), it appears to use a different and very primitive OpenGL implementation that doesn't support programmable shading or anything.
When researching the problem, I've let myself understand that OpenGL programs running on Windows load opengl32.dll, which is apparently a common library that ships with Windows (correct me if I'm wrong) and which in turn loads the "real" OpenGL implementation and forwards OpenGL function calls to it. (It also appears to be somewhat of a misnomer, as it is in fact loaded in a 64-bit process at a base address clearly above 2^32.)
Using Process Explorer, I see that, when I run the program under Java Web Start (where it works), it loads the library ig4icd64.dll, which I assume is the actual OpenGL implementation library for the Intel GPU driver; whereas when trying to run the program via java.exe, opengl32.dll is loaded, but ig4icd64.dll is never loaded, which appears to confirm my suspicion that it's using a different OpenGL implementation.
So this leads to the main question, then: How does opengl32.dll select the OpenGL implementation to use, and how can I influence this choice to ensure the correct implementation is loaded? What means are available to debug this? (And what is different between these two contexts that causes it to choose different implementations? In both cases, 64-bit Java is used, so there should be no confusion between 32- or 64-bit implementations.)
Update: I found this page at Microsoft's site that claims that the OpenGL ICD is found by way of the OpenGLDriverName value in the HKLM/System/CurrentControlSet/Control/Class/{Adapter GUID}/0000/ registry key. That value does correctly contain ig4icd64.dll, however, and perhaps more strangely, using Process Monitor to monitor the syscalls (if that's the correct Windows terminology) of the Java process reveals that it never attempts to access that key. I can't say I know if that means that the article is incorrect, or if I'm using Process Monitor incorrectly, or if it's something else.
When researching the problem, I've let myself understand that OpenGL programs running on Windows load opengl32.dll, which is apparently a common library that ships with Windows (correct me if I'm wrong) and which in turn loads the "real" OpenGL implementation and forwards OpenGL function calls to it.
Yes, this is exactly how it works. opengl32.dll acts as a conduit between the Installable Client Driver (ICD) and the programs using OpenGL.
So this leads to the main question, then: How does opengl32.dll select the OpenGL implementation to use, and how can I influence this choice to ensure the correct implementation is loaded? What means are available to debug this?
It chooses based on the window class flags (that's not a Java class, but a set of settings for a window as part of the Windows API, see https://msdn.microsoft.com/en-us/library/windows/desktop/ms633577(v=vs.85).aspx for details), the window style flags, the pixel format set for the window, the position of the window (which determines which screen and graphics device it's on) and the context creation flags.
For example, if you were to start it as a service then there'd be no graphics device to create a window on at all. If you were to start it in a remote desktop session it would run on a headless, software-rasterizer implementation.
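For comparison, here is roughly how a native C program picks a pixel format and creates a context; decisions like these, plus the device the window lives on, steer which ICD opengl32.dll ends up loading (a rough sketch with error handling omitted; JOGL does the equivalent internally):

#include <windows.h>

/* hwnd is assumed to be an already-created window on the target display. */
void create_gl_context(HWND hwnd)
{
    PIXELFORMATDESCRIPTOR pfd = { sizeof(PIXELFORMATDESCRIPTOR) };
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    HDC dc = GetDC(hwnd);
    int format = ChoosePixelFormat(dc, &pfd);   /* the implementation choice hinges on requests like this */
    SetPixelFormat(dc, format, &pfd);

    HGLRC rc = wglCreateContext(dc);            /* around here the ICD (e.g. ig4icd64.dll) typically gets loaded */
    wglMakeCurrent(dc, rc);
}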
I don't know the particular details in how the CLI java interpreter differs from WebStart. But IIRC you use javaw (note the extra w) for GUI programs.
(It also appears to be somewhat of a misnomer, as it is in fact loaded in a 64-bit process at a base address clearly above 2^32.)
It's not just opengl32.dll but all Windows system DLLs that are named …32 even in a 64-bit environment, and they're even located in \Windows\System32 to add to the confusion. The reason is very simple: source-level backwards compatibility when compiling for 64 bits. If all the library names had been changed to …64, then to compile programs for a 64-bit environment every string literal and reference to those libraries would have had to be renamed to …64 as well.
If it makes you feel better about the naming, think of the …32 as a version designator, not an architecture thing: The Win32 API was developed in parallel for Windows 9x and Windows NT 3, so just in your mind let that …32 stand for "API version created for Windows NT 3.2".

Access violation inside LoadLibrary() call

I encounter an access violation when calling a DLL inside LabVIEW. Let's call the DLL "extcode.dll". I don't have its code, it comes from an external manufacturer.
Running it in Windbg, it stopped with the message:
(724.1200): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
ntdll!RtlNewSecurityObjectWithMultipleInheritance+0x12a:
And call stack is:
ntdll!RtlNewSecurityObjectWithMultipleInheritance+0x12a
ntdll!MD5Final+0xedfc
ntdll!RtlFindClearBitsAndSet+0xdf4
ntdll!RtlFindClearBitsAndSet+0x3a8
ntdll!RtlFindClearBitsAndSet+0x4b9
ntdll!RtlCreateProcessParametersEx+0x829
ntdll!LdrLoadDll+0x9e
KERNELBASE!LoadLibraryExW+0x19c
KERNELBASE!LoadLibraryExA+0x51
LabVIEW!ChangeVINameWrapper+0x36f5
LabVIEW!ChangeVINameWrapper+0x3970
LabVIEW!ExtFuncDynLibWrapper+0x211
Note that dependencies of extcode.dll are loaded before access violation.
The situation is random, but when it happens all subsequent tries lead to it.
The code is a simple LabVIEW function calling a function in the DLL, and the prototype is super simple (int function(void)), so it cannot be a misconfiguration of the call parameters, nor pointer arithmetic. I checked every combination of calling conventions and error checking levels.
The DLL runs perfectly fine when called in other environments (.NET and C).
I found that RtlFindClearBitsAndSet is related to bit array manipulations.
What does this make you think of? Do you think it is a problem in extcode.dll, LabVIEW, or Windows?
PS: I use LabVIEW 2010 64 bit, on Windows 7 64 bit (and extcode.dll is 64 bit). I didn't manage to reproduce it on 32 bit system.
11/18 EDIT
I ended up making a standalone exe that wraps the DLL; LabVIEW communicates with it through pipes. It works perfectly, but I still don't understand why loading a DLL into LabVIEW can crash.
If it works ok when called from C, you can quit working with Windbg because the DLL is probably ok. Something is wrong with how the DLL is being called, and once the DLL overwrites some of LabView's memory it is all over, even though it might take 1000 iterations before something actually goes kablooey.
First check your calling conventions, C or StdCall. C calling convention is the default and StdCall is almost certainly what you want. (Check the DLL header file.) LabView 2009 apparently did some auto-checking and fixing of calling conventions, but the switch to LLVM in LV 2010 has made this impossible; now it just tanks.
If it still tanks after changing this, check your calling arguments again. What are you passing, scalars or pointer data? You cannot access memory allocated by the DLL from LabView without doing some sneaky things, although you can allocate memory (e.g. a byte array) in LabView and pass a pointer to it to the DLL for it to modify.
Also, if you are getting a pointer (such as a refnum) from an earlier call to the DLL and returning it, check your pointer size. LabView's Call Library function now has a "pointer size integer" type, which generates the appropriately-sized type depending on whether it is invoked in 32-bit or 64-bit LabView. (It is always 64 bits on the wire, because that has to be defined at compile time.) The fact that your DLL works in 32-bit suggests this is a possibility.
Also keep in mind that C structs are often aligned by the (C) compiler. If you are passing a pointer to a struct made of a UInt8 and a UInt16, the C compiler may allocate 32 bits (or maybe even 64 bits) for it. You'll have to pad your struct (cluster) in LabView to make it match, or write a wrapper DLL to assemble the struct.
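A small C illustration of that alignment point (a sketch; the exact sizes depend on the compiler and its packing settings):

#include <stdint.h>
#include <stdio.h>

/* Natural alignment: the compiler inserts a padding byte after 'a'. */
struct natural { uint8_t a; uint16_t b; };          /* typically sizeof == 4 */

/* Packed: matches a byte-for-byte cluster of U8 + U16. */
#pragma pack(push, 1)
struct packed  { uint8_t a; uint16_t b; };          /* sizeof == 3 */
#pragma pack(pop)

int main(void)
{
    printf("natural: %zu, packed: %zu\n", sizeof(struct natural), sizeof(struct packed));
    return 0;
}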
-Rob
An access violation (0xc0000005) will also be reported if DEP (Data Execution Prevention) is enabled on your machine and disapproves of something your binary (EXE or DLL) is trying to do. DEP is usually off by default on Windows XP, but active on Windows Vista / Windows 7.
DEP is a hardware-supported security measure designed to prevent malicious code executing some bytes that previously were considered "just some data"; I've had a few run-ins with it, all of which required re-compiling the offending binaries with a recent version of Microsoft Visual Studio; this allows you to set a flag which defines whether or not your binary supports DEP.
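If you want to check at runtime whether DEP is active for your process, kernel32 exposes GetProcessDEPPolicy (a quick sketch; the query is mainly meaningful for 32-bit processes, since 64-bit processes always run with DEP enabled):

#define _WIN32_WINNT 0x0601   /* GetProcessDEPPolicy needs a Vista+ target */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD flags = 0;
    BOOL permanent = FALSE;

    if (GetProcessDEPPolicy(GetCurrentProcess(), &flags, &permanent))
        printf("DEP enabled: %d, permanent: %d\n",
               (flags & PROCESS_DEP_ENABLE) != 0, permanent);
    else
        printf("GetProcessDEPPolicy failed: %lu\n", GetLastError());
    return 0;
}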
Some useful resources:
I found this MSDN blog entry very helpful for gaining an understanding of what DEP is and does.
You might also want to consult this TechNet article on how to turn DEP on and off on Windows XP.
This is hard to diagnose remotely, but here are a couple of ideas.
The fact that your function takes no arguments means that either the function is truly trivial, or there is some stored state in the DLL that takes into account previous function calls. Maybe the crash in this function is only an indicator, and you have a problem with a previous function call? Is there an initialization routine you're not calling?
If you only have the problem when using 64-bit LabVIEW, my first guess would be that there's a problem with the 64-bit version of the DLL, but if you are sure you don't have any problem with the exact same calls when using the DLL in other environments, I'm stumped. One possibility is that you are using the wrong calling convention (stdcall vs. cdecl) in LabVIEW.
Have you tried importing the DLL and header using the LabVIEW import wizard? This might help avoid silly mistakes with the prototypes.
One other thing to try: right click on the DLL call, choose configure and make sure you're running in the UI thread instead of any thread. Sometimes this helps.
When working with git and cygwin under NTFS, I found that sometimes the executable bit is not set (or is unset during checkout or some file operations) - inside cygwin, cd to the folder and do
chmod a+rwx *.dll
and check whether it changes anything (and check whether you want it this way!). I found this question while searching for LoadLibrary() failing with GetLastError() returning 5 (not "0xc0000005" btw) and solved the issue with this chmod call.

Flow of execution of file in WINDOWS

My question is:
What is the flow of execution of an executable file in WINDOWS? (i.e. What happens when we start a application.)
How does the OS integrate with the application to operate or handle the application?
Does it have a central control place that checks execution of each file or process?
Is the process registered in the execution directory? (if any)
What actually happens that makes the file perform its desired task within WINDOWS environment?
All help is appreciated...
There's plenty going on beyond the stages already stated (loading the PE, enumerating the dlls it depends on, calling their entry points and calling the exe entry point).
Probably the earliest action is the OS and the processor collaborating to create a new address space infrastructure (I think essentially a dedicated TLB). The OS initializes a Process Environment Block with process-wide data (e.g. the process ID and environment variables), and a Thread Environment Block with thread-specific data (thread ID, SEH root handlers, etc.). Once an appropriate address is selected for every DLL and it is loaded there, the Windows loader fixes up the addresses of exported functions. (Very briefly: at compile time a DLL cannot know the address at which it will be loaded by every consumer exe, so the actual call addresses of its functions are determined only at load time.) There are initializations of memory pages shared between processes - for window messages, for example - and, I think, some initialization of disk-paging structures. There's PLENTY more going on. The main Windows component involved is indeed the Windows loader, but the kernel and executive are involved too. Finally, the exe entry point is called - by default this is BaseProcessStart.
Typically a lot of preparation happens still after that, above the OS level, depending on used frameworks (canonical ones being CRT for native code and the CLR for managed): the framework must initialize its own internal structures to be able to deliver services to the application - memory management, exception handling, I/O, you name it.
A great place to go for such in depth discussions is Windows Internals. You can also dig a bit deeper in forums like SO, but you have to be able to break it down into more focused bits. As phrased, this is really too much for an SO post.
This is high-level and misses many details:
The OS reads the PE Header to find the dlls that the exe depends on and the exe's entry point
The DllMain function in each linked DLL is called with the DLL_PROCESS_ATTACH reason (see the sketch after this list)
The entry point of the exe is called with the appropriate arguments
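For the second step, a DLL entry point conventionally looks like this (a minimal C sketch):

#include <windows.h>

BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    switch (fdwReason)
    {
    case DLL_PROCESS_ATTACH:   /* called while the loader maps the DLL into the new process */
        break;
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;   /* FALSE from DLL_PROCESS_ATTACH makes the load (and possibly process startup) fail */
}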
There is no "execution directory" or other central control other than the kernel itself. However, there are APIs that allow you to enumerate and query the currently-running processes, such as EnumProcesses.
Your question isn't very clear, but I'll try to explain.
When you open an application, it is loaded into RAM from your disk.
The operating system jumps to the entry point of the application.
The OS provides all the calls required for showing a window, connecting with stuff and receiving user input. It also manages processing time, dividing it evenly between applications.

System calls on Windows

I just want to ask: I know that standard system calls in Linux are done via the int instruction pointing into the interrupt vector table. I assume this is similar on Windows. But how do you call some higher-level, specific system routines? For example, how do you tell Windows to create a window? I know this is handled by code in a DLL, but what actually happens at the assembler-instruction level? Does the routine in the DLL issue a software interrupt via an int instruction, or is there a different approach? Thanks.
Making a Win32 call to create a window is not really related to an interrupt. The client application is already linked against the DLL that provides the call, which exposes the address for the linker to use. Since you are asking about the difference in calling mechanism, I'm limiting the discussion here to those Win32 calls that are available to any application, as opposed to kernel-level calls or device drivers. At an assembly language level, it would be the same as any other function call, since most Win32 calls are user-level calls which internally make the needed kernel calls. The linker provides the address of the Win32 function as the target for some sort of branching instruction; the specifics depend on the compiler.
[Edit]
It looks like you are right about the interrupts and the int. vector table. CodeGuru has a good article with the OS details on how NT kernel calls work. Link:
http://www.codeguru.com/cpp/w-p/system/devicedriverdevelopment/article.php/c8035
Windows doesn't let you invoke system calls directly because system call numbers can change between builds: if a new system call gets added, the others can shift forward, and if a system call gets removed, the others can shift backwards. So to maintain backwards compatibility you call the Win32 or native functions in DLLs.
There are two groups of system calls: those serviced by the kernel (ntoskrnl) and those serviced by the win32 kernel layer (win32k).
Kernel system call stubs are easily accessible from ntdll.dll, while the win32k ones are not exported; they're private within user32.dll. Those stubs contain the system call number and the actual system call instruction which does the job.
So if you wanted to create a window, you'd call CreateWindow in user32.dll; that forwards to the extended version CreateWindowEx (CreateWindow itself is kept for backwards compatibility), which calls the private system call stub NtUserCreateWindowEx, which in turn invokes code within the win32k window manager. A sketch of the user-mode side follows below.
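To make that chain concrete: a plain C program just calls the documented user32 export, and the stub, the system call instruction and win32k stay hidden behind it (a rough sketch):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
{
    WNDCLASSA wc = { 0 };
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "DemoClass";
    RegisterClassA(&wc);

    /* This ordinary call is what eventually reaches NtUserCreateWindowEx in win32k. */
    CreateWindowExA(0, "DemoClass", "Demo", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                    CW_USEDEFAULT, CW_USEDEFAULT, 640, 480, NULL, NULL, hInst, NULL);

    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0) { TranslateMessage(&msg); DispatchMessageA(&msg); }
    return 0;
}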
