Can you make a Windows desktop app in Rust and winapi?

I've got a Windows application with a GUI written in Rust and winapi. Despite its GUI, it behaves like a console application. When the exe file is started, a Command Prompt window pops up, and the application is run from it. This is not what I want; a main window should open instead, as in all real desktop apps. How can I achieve this goal in Rust with winapi?
I've investigated some options. You can develop Windows desktop applications using Tauri or gtk-rs, but both of these techniques have drawbacks when used for Windows apps. More options may be found here. I've also tried the windows-rs samples available on the internet, but they're all console apps with a graphical user interface, which isn't what I'm looking for.
I also note that C++ desktop applications use the function int APIENTRY wWinMain(...) as the entry point while console applications use int main(...), and wWinMain doesn't seem to be available in Rust's winapi.

Whether the system allocates a console for a newly created process is controlled by the Subsystem field in the Windows-specific optional PE header. The field is populated through the linker's /SUBSYSTEM command line option. The only relevant arguments for desktop applications are CONSOLE and WINDOWS. The former instructs the system to allocate a console on launch, whereas the latter does not.
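If you want to check which subsystem a given executable was linked for, dumpbin /headers shows it; alternatively, here is a small, hedged Rust sketch that reads the Subsystem field straight out of the optional header (the input path is just a placeholder):
// Minimal sketch: read the Subsystem field from a PE image's optional header.
// Assumes a well-formed PE32/PE32+ file; the path below is only an example.
use std::convert::TryInto;
use std::fs;

fn main() -> std::io::Result<()> {
    let image = fs::read("target/release/myapp.exe")?; // placeholder path

    // e_lfanew (file offset of the "PE\0\0" signature) is stored at offset 0x3C.
    let pe_offset = u32::from_le_bytes(image[0x3C..0x40].try_into().unwrap()) as usize;

    // The optional header follows the 4-byte signature and the 20-byte COFF header;
    // Subsystem is a u16 at offset 68 of the optional header (PE32 and PE32+ alike).
    let at = pe_offset + 4 + 20 + 68;
    let subsystem = u16::from_le_bytes(image[at..at + 2].try_into().unwrap());

    match subsystem {
        2 => println!("IMAGE_SUBSYSTEM_WINDOWS_GUI - no console is allocated"),
        3 => println!("IMAGE_SUBSYSTEM_WINDOWS_CUI - a console is allocated on launch"),
        other => println!("other subsystem value: {other}"),
    }
    Ok(())
}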
You can instruct the linker to target the WINDOWS subsystem from Rust code by placing the per-module
#![windows_subsystem = "windows"]
attribute (see windows-subsystem) inside the main module of your application.
You'll find an example of this in the core_app sample of the windows crate.
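Since the question uses the winapi crate rather than windows-rs, here is a minimal sketch along the same lines (assuming winapi 0.3 with the "winuser" feature enabled in Cargo.toml); fn main() remains the Rust entry point, only the subsystem changes:
// Minimal sketch: a WINDOWS-subsystem Rust program using the winapi crate.
// Assumes winapi = { version = "0.3", features = ["winuser"] } in Cargo.toml.
#![windows_subsystem = "windows"]

use std::ptr;
use winapi::um::winuser::{MessageBoxW, MB_OK};

fn to_wide(s: &str) -> Vec<u16> {
    // Win32 "W" functions expect NUL-terminated UTF-16 strings.
    s.encode_utf16().chain(std::iter::once(0)).collect()
}

fn main() {
    let text = to_wide("Hello from a WINDOWS-subsystem app");
    let caption = to_wide("Demo");
    unsafe {
        // No console window appears; fn main() still runs as usual.
        MessageBoxW(ptr::null_mut(), text.as_ptr(), caption.as_ptr(), MB_OK);
    }
}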
This is the most convenient way to target the WINDOWS subsystem. You can also explicitly pass the linker flag along, e.g. by placing the following override into .cargo/config.toml:
[build]
rustflags = [
"-C", "link-arg=/SUBSYSTEM:WINDOWS",
]
This may or may not work, depending on the linker you happen to be using. Since the linker isn't part of the Rust toolchain, making sure that this works and has the intended effect is on you.
A note on the entry point's function name: It is irrelevant as far as the OS loader is concerned. It never even makes it into the final executable image anyway. The PE image only stores the (image-base-relative) AddressOfEntryPoint, and that symbol could have been named anything.
The concrete name is only relevant to the build tools involved in generating the respective linker input.
More info here: WinMain is just the conventional name for the Win32 process entry point. The underlying principles apply to Rust just the same, particularly the aspect that the user-defined entry point (fn main()) isn't actually the executable's entry point.

Related

Do Windows applications usually have a console for StdIn, StdOut and StdErr?

I ran into the following issue using Pascal/FPC/Lazarus, but I think it is universal to all Windows .exe files, regardless of the IDE/compiler they are created with:
I created a Windows GUI application and wanted to display some debugging info in a simple text console. Usually, in a Pascal console application, Write and WriteLn are used to write to a console/StdOut, but without additional measures in the project configuration this crashes, because in a GUI exe (at least one created with Lazarus) a console window does not exist; I get a "file not open" exception.
There are multiple ways in Lazarus (centered around controlling the -WG switch during the build process) to get a console attached that Write/WriteLn can write text to; that is not the question. My question is whether support for a console device (StdIn, StdOut, StdErr) is a Windows feature that is part of Windows itself, probably controlled by some metadata embedded in the exe (which in turn is controlled by this -WG switch), or whether it is a feature of a runtime environment added by a specific development environment, in this case the Lazarus IDE or a runtime coming with the underlying FPC compiler.
Thanks!
Yes, consoles are generally not used for Windows GUI apps, though AFAIK you can allocate one manually with AllocConsole. Very early versions of Lazarus did this.
As David says, on Windows you generally use OutputDebugString() or a logging library instead.
Technical details: AFAIK all Windows processes (console ones included) must actively initialize the console, a task typically done by the runtime. The -WG switch suppresses this by setting a special IsConsole boolean variable to false.
The console I/O initialization is in rtl/win/syswin.inc, procedure SysInitStdIO, around line 515.
In there you can see that if IsConsole is false, a dummy file descriptor is set up (AssignError) and errors are redirected to message boxes (which might go unnoticed by GUI users).
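The same trick is available outside Pascal. As a rough sketch in Rust with the winapi crate (features and error handling kept to a minimum, and hedged accordingly), a GUI-subsystem program can allocate its own console and write to it:
// Rough sketch: allocate a console from a GUI-subsystem process and write to it.
// Assumes winapi = { version = "0.3", features = ["consoleapi", "fileapi", "processenv", "winbase"] }.
#![windows_subsystem = "windows"]

use std::ptr;
use winapi::um::consoleapi::AllocConsole;
use winapi::um::fileapi::WriteFile;
use winapi::um::processenv::GetStdHandle;
use winapi::um::winbase::STD_OUTPUT_HANDLE;

fn main() {
    unsafe {
        // Creates a new console for this process; fails if one is already attached.
        if AllocConsole() != 0 {
            let msg = b"debug output goes here\r\n";
            let stdout = GetStdHandle(STD_OUTPUT_HANDLE);
            let mut written = 0;
            WriteFile(stdout, msg.as_ptr() as *const _, msg.len() as u32, &mut written, ptr::null_mut());
        }
    }
}
Whether higher-level printing facilities (WriteLn, println!, and so on) pick up the newly allocated console depends on how the language runtime caches its standard handles.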

How to run D-Bus on Windows without a console?

I'm porting a Linux app to Windows and I need dbus-daemon.exe running in my Windows session.
My app and dbus-daemon.exe work fine, but the latter still opens a default console and, not being familiar with programming on Windows, I don't know how to get rid of it.
Maybe by making it invisible?
Windows, by default, opens a console window for executables compiled for the console subsystem (the "subsystem" being essentially a bit of metadata in the Portable Executable format, aka EXE/DLL). So you have at least two options:
Compile the dbus-daemon for the Windows subsystem, if you're the one doing the compilation. It is a linker option.
Launch the dbus-daemon process passing the CREATE_NO_WINDOW flag to the relevant API function (probably CreateProcess). If you're not using the Windows API directly, look how CreateProcess and CREATE_NO_WINDOW are exposed in the API you are using. In .NET, for example, it's the ProcessStartInfo.CreateNoWindow property.
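For the second option, if the launcher happens to be written in Rust, the standard library already exposes the flag through CommandExt; a small sketch (the dbus-daemon path and argument are placeholders):
// Sketch: launch a console-subsystem program without letting it pop up a console window.
// CREATE_NO_WINDOW (0x08000000) is passed through to CreateProcess by the standard library.
use std::os::windows::process::CommandExt;
use std::process::Command;

const CREATE_NO_WINDOW: u32 = 0x0800_0000;

fn main() -> std::io::Result<()> {
    let child = Command::new("dbus-daemon.exe") // placeholder path
        .arg("--session")                       // placeholder argument
        .creation_flags(CREATE_NO_WINDOW)
        .spawn()?;
    println!("started dbus-daemon with pid {}", child.id());
    Ok(())
}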

How does OpenGL find the implementation to use on Windows?

I have a Java program using OpenGL via JOGL, and there are some bugs that only appear on Windows that I'd like to debug. For this purpose, I tried setting up a spare computer with Windows, but encountered a strange problem when I went to debug my program:
When I run the program "normally" via Java Web Start, it works perfectly, but when I compile the program and try to run it either via the command-line java launcher or via NetBeans (which I presume does the same thing), it appears to be using a different and very primitive OpenGL implementation that doesn't support programmable shading or anything.
When researching the problem, I've let myself understand that OpenGL programs running on Windows load opengl32.dll, which is apparently a common library that ships with Windows (correct me if I'm wrong) and which in turn loads the "real" OpenGL implementation and forwards OpenGL function calls to it. (It also appears to be somewhat of a misnomer, as it is in fact loaded in a 64-bit process at a base address clearly above 2^32.)
Using Process Explorer, I see that, when I run the program under Java Web Start (where it works), it loads the library ig4icd64.dll, which I assume is the actual OpenGL implementation library for the Intel GPU driver; whereas when trying to run the program via java.exe, opengl32.dll is loaded, but ig4icd64.dll is never loaded, which appears to confirm my suspicion that it's using a different OpenGL implementation.
So this leads to the main question, then: How does opengl32.dll select the OpenGL implementation to use, and how can I influence this choice to ensure the correct implementation is loaded? What means are available to debug this? (And what is different between these two contexts that causes it to choose different implementations? In both cases, 64-bit Java is used, so there should be no confusion between 32- or 64-bit implementations.)
Update: I found this page at Microsoft's site that claims that the OpenGL ICD is found by way of the OpenGLDriverName value in the HKLM/System/CurrentControlSet/Control/Class/{Adapter GUID}/0000/ registry key. That value does correctly contain ig4icd64.dll, however, and perhaps more strangely, using Process Monitor to monitor the syscalls (if that's the correct Windows terminology) of the Java process reveals that it never attempts to access that key. I can't say I know if that means that the article is incorrect, or if I'm using Process Monitor incorrectly, or if it's something else.
When researching the problem, I've let myself understand that OpenGL programs running on Windows load opengl32.dll, which is apparently a common library that ships with Windows (correct me if I'm wrong) and which in turn loads the "real" OpenGL implementation and forwards OpenGL function calls to it.
Yes, this is exactly how it works. opengl32.dll acts as a conduit between the Installable Client Driver (ICD) and the programs using OpenGL.
So this leads to the main question, then: How does opengl32.dll select the OpenGL implementation to use, and how can I influence this choice to ensure the correct implementation is loaded? What means are available to debug this?
It chooses based on the window class flags (that's not a Java class, but a set of settings for a window as part of the Windows API, see https://msdn.microsoft.com/en-us/library/windows/desktop/ms633577(v=vs.85).aspx for details), the window style flags, the pixel format set for the window, the position of the window (which determines which screen and graphics device it's on) and the context creation flags.
For example, if you were to start it as a service, there'd be no graphics device to create a window on at all. If you were to start it in a remote desktop session, it would run on a headless, software-rasterizer implementation.
I don't know the particular details of how the CLI java interpreter differs from WebStart. But IIRC you use javaw (note the extra w) for GUI programs.
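As an additional debugging aid from inside the process, you can check whether a particular ICD DLL ended up loaded, mirroring what Process Explorer shows. A hedged sketch in Rust with the winapi crate (the question is about Java, where the same check could be made through any FFI binding; ig4icd64.dll is simply the DLL named in the question):
// Sketch: after creating the OpenGL context, check whether a given ICD DLL
// ended up loaded in this process (equivalent to what Process Explorer shows).
// Assumes winapi = { version = "0.3", features = ["libloaderapi"] }.
use winapi::um::libloaderapi::GetModuleHandleW;

fn is_module_loaded(name: &str) -> bool {
    let wide: Vec<u16> = name.encode_utf16().chain(std::iter::once(0)).collect();
    // GetModuleHandleW returns NULL if the module is not loaded in this process.
    !unsafe { GetModuleHandleW(wide.as_ptr()) }.is_null()
}

fn main() {
    // "ig4icd64.dll" is the Intel ICD from the question; substitute your driver's DLL.
    println!("ICD loaded: {}", is_module_loaded("ig4icd64.dll"));
    println!("opengl32 loaded: {}", is_module_loaded("opengl32.dll"));
}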
(It also appears to be somewhat of a misnomer, as it is in fact loaded in a 64-bit process at a base address clearly above 2^32.)
It's not just opengl32.dll but all Windows system DLLs that are named …32 even in a 64-bit environment, and they're even located in \Windows\System32, to add to the confusion. For a very simple reason: source-code-level backwards compatibility when compiling for 64 bits. If all the library names had been changed to …64, then to compile programs for a 64-bit environment, all the string literals and references to the libraries would have had to be renamed to …64.
If it makes you feel better about the naming, think of the …32 as a version designator, not an architecture thing: The Win32 API was developed in parallel for Windows 9x and Windows NT 3, so just in your mind let that …32 stand for "API version created for Windows NT 3.2".

WinMain entry point - VS Linker vs Windows API

I had been setting my application (MASM assembly language program) entry point via Visual Studio configurations, in the Project properties as:
Linker\System\SubSystem: Windows (/SUBSYSTEM:WINDOWS)
Linker\Advanced\Entry Point: WinMain
And my main proc is called WinMain (matching the above setting). It is a basic application that makes simple Windows API calls, e.g. MessageBoxA... and it works.
Now I'm building a Window application (in assembly), I read somewhere that I need to call the WinMain Windows API for an entry point.
I'm now confused! Which technique do I use to set the entry point of my application (exe)? The Windows API 'WinMain' or the Visual Studio linker entry point setting? And what is the difference, i.e. C++ runtime vs OS?
If you are using the C runtime library (which is usually the case when programming in C) then you must not specify the linker entry point yourself. If you do, the runtime library will not be properly initialized, and any runtime library calls (including those inserted by the compiler) may fail.
Instead, your main function should correspond to the relevant standard: WinMain() for a GUI application, or main() for a console application.
In an assembly language program that is not linked to the C runtime library, you should specify an entry point of your choosing.
The signature of the native entry point is
DWORD CALLBACK RawEntryPoint(void);
Important:
Returning from the raw entry point implicitly calls ExitThread (see this answer) which is not usually the right thing to do, because if the Windows API has created any threads that you don't know about, the process won't exit until they do. Note that the Windows API documentation does not always indicate when a particular API function may cause a thread to be created.
Instead, you should explicitly call ExitProcess. This is what the C runtime library does when you return from WinMain() or main().
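To make the contrast concrete, here is a hedged, Rust-flavoured sketch of the situation; raw_entry_point is a made-up name that a real no-CRT build would have to select with the linker's /ENTRY option, and the trailing fn main() exists only so the fragment compiles as an ordinary program:
// Conceptual sketch only: what a raw (no-CRT) entry point has to do itself.
#[link(name = "kernel32")]
extern "system" {
    fn ExitProcess(exit_code: u32) -> !;
}

fn application_code() -> u32 {
    // ... real work would go here ...
    0
}

#[no_mangle]
pub extern "system" fn raw_entry_point() -> u32 {
    let code = application_code();
    // Explicitly end the whole process; merely returning would only end this
    // thread (the implicit ExitThread described above), and other threads
    // could keep the process alive.
    unsafe { ExitProcess(code) }
}

fn main() {
    // Present only so this sketch compiles as an ordinary program; a genuine
    // raw-entry-point build would have no Rust main() at all.
    unsafe { ExitProcess(application_code()) }
}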

Create Windows Program with GUI but use 'main()' (in D)?

Is it possible, in Windows, to create a GUI program which has its entry point in 'main()'? How do I do this?
My use for this is that I want a cross-platform application, with one uniform entry point.
Write your application using main() and all the GUI calls in there that you would have used in WinMain. This will create an application with both a GUI and a console window.
Use the Windows SDK tool editbin /SUBSYSTEM:WINDOWS appname.exe to change the subsystem flag in the PE header, so Windows won't create a console window automatically.
If you want to have a working stdout for debug message or the like, you can either use freopen to direct stdout to a file, or AllocConsole when you decide a console window is needed (for example, after an error occurs).
BTW: This thread indicates that the DMD compiler will prefer main() over WinMain() anyway if it finds both.
