I am working on an MPI application for scientific computing. While setting up our distribution process, the question of the MPI version came up.
I have read in several places on the net that MPI should be built with the same compiler suite as the application, but nothing that really explains why.
So I wonder: what defines a version of MPI once it has been compiled? Is it just the version of the source package, or is it the version of the source package plus the compiler suite (and its version) that was used to compile MPI?
Matching the version of the MPI implementation is an absolute must. A typical example is a datatype whose definition changed between versions.
You should also compile all involved libraries, particularly MPI, with the same compiler version. A typical example is compiler-specific runtime libraries: one compiler replaces a call to memcpy with a built-in optimized version; if you then try to link with a different compiler whose runtime library doesn't provide that optimized function, you get a linker error. System libraries are usually fine, but for highly optimized builds, as well as for C++ code, these kinds of library ABI (application binary interface) incompatibilities are common and very painful.
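As an aside, here is a minimal sketch (assuming an MPI-3 implementation, so MPI_Get_library_version is available) of how a program can report both the MPI standard it was compiled against and the MPI library it actually finds at run time, which helps diagnose exactly this kind of mismatch:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        // MPI_VERSION / MPI_SUBVERSION come from the mpi.h the application
        // was *compiled* against.
        std::printf("Compiled against MPI standard %d.%d\n",
                    MPI_VERSION, MPI_SUBVERSION);

        // MPI_Get_library_version (MPI-3) reports the implementation linked
        // at run time, e.g. "Open MPI v4.1.x ..." or "MPICH Version: ...".
        char lib[MPI_MAX_LIBRARY_VERSION_STRING];
        int len = 0;
        MPI_Get_library_version(lib, &len);
        std::printf("Runtime MPI library: %s\n", lib);

        MPI_Finalize();
        return 0;
    }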
If I build a DLL with the Rust language, does it require libgcc*.dll to be present at run time?
On one hand:
I've seen a post somewhere on the Internet, claiming that yes it does;
rustc.exe has libgcc_s_dw2-1.dll in its directory, and cargo.exe won't run without the dll when downloaded from the http://crates.io website;
On the other hand:
I've seen articles about building toy OS kernels in Rust, so they most certainly don't require libgcc dynamic library to be present.
So, I'm confused. What's the definitive answer?
Rust provides two main toolchains for Windows: x86_64-pc-windows-gnu and x86_64-pc-windows-msvc.
The -gnu toolchain includes an msys environment and uses GCC's ld.exe to link object files. This toolchain requires libgcc*.dll to be present at runtime. The main advantage of this toolchain is that it allows you to link against other msys-provided libraries, which can make it easier to link with certain C/C++ libraries that are difficult to build under the normal Windows environment.
The -msvc toolchain uses the standard, native Windows development tools (either a Windows SDK install or a Visual Studio install). This toolchain does not use libgcc*.dll at either compile time or runtime. Since this toolchain uses the normal Windows linker, you are free to link against any normal Windows native libraries.
If you need to target 32-bit Windows, i686- variants of both of these toolchains are available.
NOTE: the answer below summarizes the situation as of Sep 2014; I don't know whether it's still current, or whether things have changed for better or worse since then. But I strongly suspect things have changed, given that 2 years have already passed. It would be cool if somebody asked steveklabnik about it again and updated the info below, or wrote a new, fresher answer!
Quick & raw transcript of a Rust IRC chat with steveklabnik, who gave me a kind of answer:
Hi; I have a question: if I build a DLL with Rust, does it require libgcc*.dll to be present on run time? (on Windows)
I believe that if you use the standard library, then it does require it;
IIRC we depend on one symbol from it;
but I am unsure.
How can I avoid using the standard library, or those parts of it that do? (and/or do you know which symbol exactly?)
It involves #[no_std] at your crate root; I think the unsafe guide has more.
Running nm -D | grep gcc shows me __gcc_personality_v0, and then there is this: What is __gxx_personality_v0 for?,
so it looks like our stack unwinding implementation depends on that.
I seem to recall I've seen some RFCs to the effect of splitting standard library, too; are there parts I can use without pulling libgcc in?
Yes, libcore doesn't require any of that.
You give up libstd.
Also, quoting parts of the unsafe guide:
The core library (libcore) has very few dependencies and is much more portable than the standard library (libstd) itself. Additionally, the core library has most of the necessary functionality for writing idiomatic and effective Rust code. (...)
Further libraries, such as liballoc, add functionality to libcore which make other platform-specific assumptions, but continue to be more portable than the standard library itself.
And fragment of the current docs for unwind module:
Currently Rust uses unwind runtime provided by libgcc.
(The transcript was edited slightly for readability. Still, I'll happily delete this answer if anyone provides something better formatted and more thorough!)
I'm currently compiling an open-source optimization library (native C++) that is supplied with makefiles for use with gcc. As I am a Windows user, I'm curious about the two options I see for compiling this: using gcc with MinGW/Cygwin, or manually building a Visual Studio project and compiling the source.
1) If I compile using MinGW/Cygwin + gcc, will the resulting .lib (static library) require any libraries from MinGW/Cygwin? I.e. can I distribute my compiled .lib to a Windows PC that doesn't have MinGW/Cygwin and will it still run?
2) Other than performance differences between the compilers themselves, is there any overhead associated with compiling using MinGW/Cygwin and gcc - as in, does the emulation layer get compiled into the library, or does gcc build a native Windows library?
3) If speed is my primary objective of the library, which is the best method to use? I realise this is quite open ended, and I may be best running my own benchmarks, but if someone has experience here this would be great!
The whole point of Cygwin is the Linux emulation layer, and by default (i.e. if you don't cross-compile), binaries need cygwin1.dll to run.
This is not the case for MinGW, which creates binaries as 'native' as the ones from MSVC. However, MinGW comes with its own set of runtime libraries, in particular libstdc++-6.dll. This library can also be linked statically by using -static-libstdc++, in which case you also probably want to compile with -static-libgcc.
This does not mean that you can freely mix C++ libraries from different compilers (see this page on mingw.org). If you do not want to restrict yourself to an extern "C" interface to your library, you most likely will have to choose a single compiler and stick with it.
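As an illustration of the extern "C" escape hatch mentioned above, here is a hedged sketch (the Optimizer/solver names are made up for illustration) of a C-compatible facade over a C++ class. Only plain C types cross the library boundary, so the compiled library can be consumed by code built with a different compiler:

    // solver_c_api.h -- hypothetical C-compatible facade over a C++ optimizer.
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct solver solver;   /* opaque handle, hides the C++ class */

    solver *solver_create(int dimensions);
    void    solver_destroy(solver *s);

    #ifdef __cplusplus
    }
    #endif

    // solver_c_api.cpp -- built with the same compiler suite as the C++ library.
    // "Optimizer" is a stand-in for whatever C++ class the library really exposes.
    #include <vector>

    class Optimizer {
    public:
        explicit Optimizer(int dimensions) : x_(dimensions, 0.0) {}
    private:
        std::vector<double> x_;
    };

    struct solver { Optimizer impl; };

    extern "C" solver *solver_create(int dimensions) { return new solver{Optimizer(dimensions)}; }
    extern "C" void    solver_destroy(solver *s)     { delete s; }

Everything that involves C++ types (std::string, exceptions, templates) stays behind this boundary and is built entirely with one compiler.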
As to your performance concerns: Using Cygwin only causes a (minor?) penalty when actually interacting with the OS - where raw computations are concerned, only the quality of the optimizer matters.
How do various OpenMP compilers create threads? I think this may depend on the specific compiler (GCC, ICPC, Visual C++), but maybe they all use some library like pthreads?
Most of the implementations use pthreads, since the pthreads library is available on both Linux and Windows. This simplifies the implementation, since they don't have to change the threading layer depending on the operating system. The one implementation that (of course) differs is Microsoft's: since they only support Windows, they use the Windows threading APIs. At least on Linux, you can check for yourself which threading model is being used by running nm on the OpenMP runtime library and seeing what the external dependencies are.
Compiling a program to bytecode instead of native code enables a certain level of portability, as long as a fitting virtual machine exists.
But I'm kinda wondering, why delay the compilation? Why not simply compile the byte code when installing an application?
And if that is done, why not apply it to languages that currently compile directly to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer, and compile on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT compilers (Java HotSpot, for example) use type-feedback-based inlining. They track which types are actually used in the program and inline function calls based on the assumption that what they saw earlier is what they will see later. For this to work, they need to run the program through a number of iterations of its "hot loop" in order to learn which types are used.
This optimization is totally unavailable at install time.
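To make the idea concrete, here is a hedged C++ analogy (this is not how HotSpot is implemented, just an illustration of the transformation): an ahead-of-time compiler must keep the virtual call below, while a JIT that has observed that nearly every object is a Circle can emit the equivalent of the guarded, inlined fast path:

    #include <memory>
    #include <vector>

    struct Shape  { virtual double area() const = 0; virtual ~Shape() = default; };
    struct Circle : Shape { double r; explicit Circle(double r) : r(r) {} double area() const override { return 3.14159 * r * r; } };
    struct Square : Shape { double s; explicit Square(double s) : s(s) {} double area() const override { return s * s; } };

    // What an ahead-of-time compiler has to emit: an indirect (virtual) call,
    // because it cannot know which concrete types will show up at run time.
    double total_area(const std::vector<std::unique_ptr<Shape>> &shapes) {
        double sum = 0.0;
        for (const auto &s : shapes) sum += s->area();   // virtual dispatch
        return sum;
    }

    // The moral equivalent of what a type-feedback JIT produces after profiling:
    // a cheap type check guarding an inlined fast path, with the virtual call
    // kept only as the rare slow path (or a deoptimization point).
    double total_area_speculative(const std::vector<std::unique_ptr<Shape>> &shapes) {
        double sum = 0.0;
        for (const auto &s : shapes) {
            if (auto *c = dynamic_cast<const Circle *>(s.get()))
                sum += 3.14159 * c->r * c->r;            // inlined fast path
            else
                sum += s->area();                        // slow path
        }
        return sum;
    }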
The bytecode has already been compiled, just as the C++ code has been compiled.
Also, the JIT compilers, i.e. the .NET and Java runtimes, are massive and dynamic; and you can't foresee which parts of them an app will use, so you need to ship the entire runtime.
Also one has to realize that a language targeted to a virtual machine has very different design goals than a language targeted to bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
Java wouldn't work on bare metal, for so many reasons; primitive types, for one.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.
Does OpenMP have a runtime (like the .NET CLR on top of the operating system), or just a compiler?
OpenMP doesn't really have, or need, anything like the .NET CLR. Compilers typically produce code which uses one or another of the threading approaches already available on the platform. There are also a few environment variables which OpenMP programs may want to use, but that hardly constitutes a run-time system either.
I've never come across an OpenMP compiler which needed a separate installation of a run time system or anything like one.
EDIT: An OpenMP installation also needs to provide functions such as omp_get_thread_num, which are usually packaged in a library of some sort.
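For reference, here is a minimal sketch of the kind of program that pulls in that support library (with GCC the library is libgomp, linked in automatically by -fopenmp; other compilers ship their own equivalent):

    // Build with OpenMP enabled, e.g. g++ -fopenmp hello_omp.cpp
    #include <cstdio>
    #include <omp.h>

    int main() {
        // The thread count can be influenced by the OMP_NUM_THREADS environment
        // variable, one of the few "runtime" knobs OpenMP exposes.
        #pragma omp parallel
        {
            std::printf("Hello from thread %d of %d\n",
                        omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }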