Do DLLs built with Rust require libgcc.dll at run time?

If I build a DLL with the Rust language, does it require libgcc*.dll to be present at run time?
On one hand:
I've seen a post somewhere on the Internet claiming that yes, it does;
rustc.exe has libgcc_s_dw2-1.dll in its directory, and cargo.exe, as downloaded from the http://crates.io website, won't run without that DLL;
On the other hand:
I've seen articles about building toy OS kernels in Rust, and those most certainly can't require the libgcc dynamic library to be present.
So, I'm confused. What's the definitive answer?

Rust provides two main toolchains for Windows: x86_64-pc-windows-gnu and x86_64-pc-windows-msvc.
The -gnu toolchain includes an MSYS environment and uses GCC's ld.exe to link object files. This toolchain requires libgcc*.dll to be present at runtime. Its main advantage is that it lets you link against other MSYS-provided libraries, which can make it easier to use certain C/C++ libraries that are difficult to build under the normal Windows environment.
The -msvc toolchain uses the standard, native Windows development tools (either a Windows SDK install or a Visual Studio install). This toolchain does not use libgcc*.dll at either compile time or runtime. Since it uses the normal Windows linker, you are free to link against any native Windows libraries.
If you need to target 32-bit Windows, i686- variants of both of these toolchains are available.
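For a concrete picture, here is a minimal sketch of a Rust DLL; the crate layout, the function name, and the cdylib crate type are illustrative assumptions, not something from the question:

```rust
// lib.rs — a minimal Rust DLL (illustrative sketch).
// Cargo.toml would declare the library as a C-compatible DLL:
//
//   [lib]
//   crate-type = ["cdylib"]

// `#[no_mangle]` keeps the exported symbol name stable, and `extern "C"`
// gives the function the platform's C calling convention so other
// Windows code can call it.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

Building this with, say, `cargo build --target x86_64-pc-windows-msvc` and then inspecting the resulting DLL with a tool such as `dumpbin /dependents` should show no libgcc entry; with the -gnu toolchain, a libgcc DLL may appear among the dependencies, depending on how unwinding support is linked.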
NOTE: the answer below summarizes the situation as of September 2014; I don't know whether it is still current, or whether things have changed for better or worse since then. But I strongly suspect they have, given that two years have already passed. It would be great if somebody asked steveklabnik about this again and updated the information below, or wrote a new, fresher answer!
A quick and raw transcript of a Rust IRC chat with steveklabnik, who gave me something of an answer:
Hi; I have a question: if I build a DLL with Rust, does it require libgcc*.dll to be present at run time? (on Windows)
I believe that if you use the standard library, then it does require it;
IIRC we depend on one symbol from it;
but I am unsure.
How can I avoid using the standard library, or those parts of it that do? (and/or do you know which symbol exactly?)
It involves #[no_std] at your crate root; I think the unsafe guide has more.
Running nm -D | grep gcc shows me __gcc_personality_v0, and then there is this: "What is __gxx_personality_v0 for?",
so it looks like our stack unwinding implementation depends on that.
I seem to recall seeing some RFCs to the effect of splitting the standard library, too; are there parts I can use without pulling libgcc in?
Yes, libcore doesn't require any of that.
You give up libstd.
Also, quoting parts of the unsafe guide:
The core library (libcore) has very few dependencies and is much more portable than the standard library (libstd) itself. Additionally, the core library has most of the necessary functionality for writing idiomatic and effective Rust code. (...)
Further libraries, such as liballoc, add functionality to libcore which make other platform-specific assumptions, but continue to be more portable than the standard library itself.
And a fragment of the current docs for the unwind module:
Currently Rust uses unwind runtime provided by libgcc.
(The transcript was edited slightly for readability. Still, I'll happily delete this answer if anyone provides something better formatted and more thorough!)
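To make the no_std route from the transcript concrete, here is a minimal sketch as it would look on a present-day compiler; the #![no_std] spelling and the mandatory #[panic_handler] postdate the 2014 transcript, so treat the details as approximate:

```rust
// A minimal no_std library: only libcore is used, no libstd, and thus
// no reliance on libstd's unwinding machinery at this level.
#![no_std]

use core::panic::PanicInfo;

// An exported function that needs nothing beyond libcore.
#[no_mangle]
pub extern "C" fn double(x: u32) -> u32 {
    x.wrapping_mul(2)
}

// Without libstd, the crate must provide its own panic handler.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}
```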

Related

What is the appropriate Unix-POSIX based toolchain to use on Windows?

Let's say I have retrieved some original, unmodified C/C++ library distributions that were designed specifically for Unix-POSIX based environments, and that are not directly portable to modern Windows systems.
What I would like to do, at the very least, is build the needed static or dynamic libraries so that I can link them against my own Visual Studio projects.
I know that with MinGW (clang) you only need the MSVC run-time libraries, since it has no need for the POSIX DLL dependencies and uses the Win32 libraries directly. However, it doesn't have all of the Unix environment features that Cygwin (gcc/g++) does, and any C/C++ code that relies on native POSIX functionality such as fork(), mmap(), etc. would have to be re-implemented against the Win32 equivalents for proper compilation, linking, and running of the application.
With Cygwin I get access to nearly all of the Unix-POSIX features, at the cost of a somewhat higher learning curve, and any libraries built by Cygwin's compiler(s) rely on cygwin1.dll to run on a Windows machine.
A Primary Example:
The current libraries that I'm trying to build to work with Visual Studio are GNU's GMP, MPIR, MPFR, and MPFRC++.
So far I have been able to build MPIR in Visual Studio 2017 with the aid of Python and the Windows version of Yasm. However, MPFR depends on GMP, so now I have to build GMP first.
I could use MinGW to build GMP, which might be a little easier in some ways, but by using Cygwin and building GMP through the Unix/Linux/POSIX environment I would expose myself to the workings of Unix/POSIX systems.
Note: I'm primarily familiar with Windows environments and until recently had never worked with or on any Unix-based OS, so there is a bit of a learning curve for me. I'm doing all the research and reading that I can on my own, which is not a problem; it provides good experience with every bit of trial and error.
What I would like to know is: when working on a Windows machine, which of the two scenarios is the preferable method for building POSIX-designed libraries so that they link properly into MSVC Windows-based applications? In other words, I would like to efficiently convert POSIX-specific libraries to work on my current platform or machine. Notice that I did not say I wanted to "rewrite" the libraries to make them portable to any arbitrary environment; in this specific case I only need them to run on my Windows environment. Or any other appropriate method. I will be using some C and some C++ libraries linked into my MSVC C++17 project(s). (I may also have some of the terminology wrong in some of my statements or assumptions, as I'm not familiar with Unix-POSIX environments.)

gdb: how to step into the C runtime? Where is crt_c.c?

When I step into the debugged program, it says that it can't find the crt/crt_c.c file. I have the sources of GCC 6.3.0 downloaded, but where is crt_c.c in there?
Also, how can I find the source code for printf and rand in there? I'd like to step through them in the debugger.
The IDE is Code::Blocks, if that's important.
Edit: I'm trying to do this because I'm trying to decrease the size of my executable. Going straight to a freestanding build leaves me with a lot of missing functions, so I intend to study and replace them one by one. I'm trying to make my program a little smaller and faster, and to be able to study the assembly output a bit more easily.
Also, I forgot to mention: I'm on Windows, with MSYS2. But the answer is still helpful.
How can I find source code for printf and rand in there?
They (printf, rand, etc.) are part of your C standard library, which (on Linux) is outside of the GCC compiler. But crt0 is provided by GCC (though it is often not compiled with debug information), and some C files there are generated in the build tree during the compilation of GCC.
(On Windows, most of the C standard library is proprietary, inside some DLL provided by Microsoft, and you are probably forbidden to look into the implementation or to reverse-engineer it; AFAIK EU laws might mention some exception related to interoperability, but then you need to consult a lawyer, and I am not a lawyer.)
Look into GNU glibc (or perhaps musl-libc) if you want to study its source code. libc generally uses system calls (listed in syscalls(2)) provided by the Linux kernel.
I'd like to step through them in debugger.
In practice you won't be able to do that easily, because the libc is provided by your distribution and has generally been compiled without debug information in DWARF format.
Some Linux distributions provide a debuggable variant of libc, perhaps as some libc6-dbg package.
(your question lacks motivation and smells like some XY problem)
I intend to study and replace them one by one.
This is very unrealistic (particularly on Windows, whose system call interface is not well documented) and could take you many years (or perhaps more than a lifetime). Do you have that much time?
Read also Operating Systems: Three Easy Pieces and look into OsDev wiki.
I'm trying to do so because I'm trying to decrease size of my executable.
Wrong approach. A debugger needs debug info (e.g. in DWARF), which will increase the size of the executable (but it can later be stripped). BTW, standard C functions live in some common shared library (or DLL on Windows) which is used by many processes.
I'm on windows, msys2.
Bad choice. Windows is proprietary. Linux is made of free software (more than ten billion lines of source code, if you consider all the useful packages inside a typical Linux distribution), whose source code you could study (even if doing so would take several lifetimes).

Can I mix arm-eabi with arm-elf?

I have a product whose bootloader and application are compiled using a compiler (GNUARM GCC 4.1.1) that generates "arm-elf" output.
The bootloader and application are segregated in different FLASH memory areas in the linker script.
The application has a feature that enables it to call the bootloader (as a simple C function with 2 parameters).
I need to be able to upgrade existing products around the world, and I can do this safely as long as I always use the same compiler.
Now I'd like to be able to compile this product application using a new GCC version that outputs arm-eabi.
Everything will be fine for new products, where both application and bootloader are compiled using the same toolchain, but what happens with existing products?
If I flash a new application, compiled with GCC 4.6.x and arm-none-eabi, will my application still be able to call the bootloader function from the old arm-elf bootloader?
Furthermore, not directly related to the above question, can I mix object files compiled with arm-elf into a binary compiled with arm-eabi?
EDIT:
I think it is worth making clear that I am building for a bare-metal ARM7, if that makes any difference...
No. An ABI is the magic that makes binaries compatible. The Application Binary Interface determines various conventions for how to communicate with other libraries/applications. For example, an ABI will define the calling convention, which makes implicit assumptions about things like which registers are used for passing arguments to C functions, and how to deal with excess arguments.
I don't know the exact differences between EABI and ABI, but you can find some of them by reading up on EABI. Debian's page mentions that the syscall convention is different, along with some alignment changes.
Given the above, of course, you cannot mix arm-elf and arm-eabi objects.
The above answer is given on the assumption that you talk to the bootloader code from your main application. Given that the interface may be very simple (just a function call with two parameters), it's possible that it might work. It'd be an interesting experiment to try. However, it is not **guaranteed** to work.
Please keep in mind you do not have to use EABI. You can generate an arm-elf toolchain with GCC 4.6 just as well as with older versions. Since you're using a binary toolchain on Windows, you may have more of a challenge. I'd suggest investigating crosstool-ng, which works quite well on Linux, and may work okay on Cygwin for building the appropriate toolchain.
There is always the option of making the call to bootloader in inline assembly, in which case you can adhere to any calling standard you need :).
However, besides the portability issue it introduces, this approach will also make two assumptions about your bootloader and application:
you are able to detect in your app that a particular device has a bootloader built with your non-EABI toolchain, as you can only call the older type of bootloader using the assembly code.
the two parameters you mentioned are used as primitive data by your bootloader. Should the bootloader use them, for example, as pointers to structs, then you could be facing issues with incorrect alignment, padding, and so forth.
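To make the "ABI as a contract" point above concrete, here is a hedged sketch, written in Rust for brevity, of what a call into a bootloader entry point at a fixed flash address looks like; the address, signature, and names are invented for illustration:

```rust
// Hypothetical sketch: calling a bootloader entry point that lives at a
// fixed flash address, through a function pointer whose type spells out
// the ABI contract (register usage, argument passing, return value).
// The address and the two-u32 signature are made up for illustration.
const BOOTLOADER_ENTRY: usize = 0x0000_0100;

// `extern "C"` pins the calling convention; mixing objects built for
// different ABIs (arm-elf vs. arm-eabi) breaks exactly this contract.
type BootFn = unsafe extern "C" fn(u32, u32) -> u32;

unsafe fn call_bootloader(cmd: u32, arg: u32) -> u32 {
    let entry: BootFn = core::mem::transmute(BOOTLOADER_ENTRY);
    entry(cmd, arg)
}
```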
I think that this will be OK. I did a migration somewhat like this myself; from what I remember, I only ran into a problem to do with the handling of division.
This is the best info I can find about the differences; it suggests that if you don't have struct alignment issues, you may be OK.

Windows GNU compiler suite without external dependencies

Are there any free, GCC-compatible suites for Windows that generate standalone executables without external dependencies?
Here are a few that do not fit the bill, ordered by undesirability, least to most:
MinGW (MSVCRT.DLL)
Cygwin (Cygwin runtime DLLs)
DJGPP (NTVDM.EXE; not present on x64 platforms)
Right now I'm leaning towards (and tentatively using) MinGW, as it does seem to be the "cleanest" approach. I'm still not thrilled with the MSVCRT.DLL dependency, especially as I can and do have to deal with customers running pre-Win2K systems. (Windows 2000 was the first edition to ship with MSVCRT.DLL.) Distributing MSVCRT with the application is not an option.
P.S.: I am aware that there is an attempt to create an MSVCRT replacement for MinGW, but it is still unstable/beta, and has limited functionality; not something I'd feel comfortable using for production applications.
P.P.S.: Answers to the effect of "MSVCRT is usually there anyway" or "Just package the redist" are not constructive. The question specifically asks how to AVOID dependencies, not how to ensure their presence.
To avoid MSVCRT with MinGW, use the following flags for the linker:
-nostdlib -Wl,--exclude-libs,msvcrt.a -Wl,-eWinMain
Notice that you have to declare a function named WinMain (you can also choose another name for it), which will be your main. You also can't use any of the standard functions like strlen, printf, and friends; instead, you must use the WinAPI equivalents such as lstrcmp, wsprintf, etc.
You can see an example of this using SCons at:
https://sourceforge.net/p/nsis/code/6160/tree/NSIS/trunk/SCons/Config/gnu
I've used this for my project that also requires Windows 9x compatibility. This also has the nice side effect of having smaller executables. From your comments above, it seems you're looking for that too. If that's the case, there are even more tricks you can use in the file I linked above.
Microsoft has a table matching CRT functions to their WinAPI equivalents in KB99456:
Win32 Equivalents for C Run-Time Functions (Web Archive)
More information on getting rid of CRT (although for VC, it can still help) at:
http://www.catch22.net/tuts/win32/reducing-executable-size

How can I compile object code for the wrong system? (cross-compiling question)

Referring to this question about compiling: I don't understand how my program for the Mac can use the right -arch flags, compile with those flags for the system I am on (a ppc64 G5), and still produce the wrong object code.
Also, if I were on Linux and used a cross compiler to produce 10.5 code for the Mac, how would that be any different from what I described above?
The background is that I have tried to compile various Apache modules. They compile with -arch ppc, ppc64, etc.; I get no errors and I get my mod_whatever.so. But Apache always complains that some symbol isn't found. Apparently it has to do with what the compiler produces, even though the file type says it is for ppc, ppc64, i386, x86_64 (universal binary) and seems to match all the other .so mods I have.
I guess I don't understand how it could compile for my system with no problem and then say my system can't use it. Maybe I do not understand what a compiler is actually giving me.
EDIT: All error messages and the complete process can be seen here.
Thank you.
Looking at the other thread and elsewhere, and without a G5 or OS X Server installation, I can only make a few comments and suggestions, but perhaps they will help.
It's generally not a good idea to modify the OS vendor's installed software. Installing a new Apache module is less problematic than, say, overwriting an existing library, but you're still at the mercy of the vendor, in that a Software Update could delete your modifications; beyond that, you have to figure out how the vendor's version was built in the first place. A common practice in the OS X world is to avoid this by making a completely separate installation of an open source product, like Apache, using, for instance, MacPorts. That has its cons, too: to achieve a high level of independence, MacPorts will often download and build a lot of dependent packages for things which are already in OS X, but there's no harm in that other than some extra build cycles and disk space.
That said, it should be possible to build and install Apache modules to supplement those supplied by Apple. Apple publishes the changes it makes to open source products here; you can drill down through the various versions there to find the apache directory, which contains the source, Makefile, and applied patches. That might be of help.
Make sure that the mod_*.so you build are truly 64-bit and don't depend on any non-64 bit libraries. Use otool -L mod_*.so to see the dynamic libraries that each references and then use file on those libraries to ensure they all have ppc64 variants.
Make sure you are using up-to-date developer tools (Xcode 3.1.3 is current).
While the developer toolchain uses many open source components, Apple has enhanced many of them, and there are big differences in OS X's ABIs, universal binary support, dynamic libraries, etc. The bottom line is that cross-compilation of OS X-targeted object code on Linux (or any other non-OS X platform) is neither supported nor practical.
