I have a project compiled using __cdecl calling convention (msvc2010) and I compiled boost using the same compiler using the default settings.
The project linked with boost, but at runtime I got an assert message like this:
File: ...\boost\boost\program_options\detail\parsers.hpp
Line: 79
Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention.
I have the following questions:
What calling convention does boost build with by default on Windows (msvc2010)?
How do I compile boost with the __cdecl calling convention?
Why wasn't boost able to prevent linking against code with a different calling convention? I understood that boost has really smart library auto-inclusion code.
Update #1
It looks like boost does compile and link with the proper calling convention, yet at runtime I still get the above problem. I made a sample application using the same code and it works, but in my application it fails. The only difference could come from the project configuration or from includes/stdafx.h.
Just use
bjam ... cxxflags=/Zp4
while building boost libraries.
As far as I know there's no way to make C++ use the cdecl calling convention (see MSDN Calling Convention). C++ method calling is just different from C. The only opportunity you have to use one of the C calling conventions is for functions, which includes class static functions in C++. If you know that's the case, you can try forcing the option during the build:
bjam cxxflags=/Gd ...
(see BBv2 Builtin features)
Or, to make it "permanent", set up a user-config.jam with your compiler and add it to the build options for all BBv2 msvc builds (see BBv2 Configuration and related docs). As for your other questions:
Boost uses the default calling convention MSVC uses, except for cases where it overrides it at the code level. I don't know where those are as they are library specific. So you'd have to search the code for the "__*" code decorators.
See above for partial answer.
Detection: there are two reasons. There is a limit to how many different options we can reasonably detect for building, as the number of possible variations grows exponentially, so we limit it to the most important cases. And in the case of calling convention, it's not actually possible, since it's something that can be changed on a per-function basis.
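To illustrate that last point, here is a minimal sketch (the function names are made up): the convention is part of each declaration, so nothing in a build request can tell the build system which one a given piece of code actually uses.

// Hypothetical declarations: the convention can differ per function,
// whatever the compiler-wide default (/Gd, /Gz, /Gr) happens to be.
int __cdecl   parse_args(int argc, char** argv);     // caller cleans up the stack
int __stdcall window_proc(void* hwnd, unsigned msg); // callee cleans up the stack

// A function pointer must repeat the convention; calling through a pointer
// declared with the wrong one corrupts ESP -- exactly what
// "Run-Time Check Failure #0" reports.
typedef int (__stdcall *window_proc_ptr)(void*, unsigned);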
I found the cause of the problem inside one of the shared property files: <StructMemberAlignment>4Bytes</StructMemberAlignment>
If I remove it the code works. Still, I'm not sure why this is happening or how I could solve it without removing the above setting (which was required by another library).
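For what it's worth, a packing mismatch changes the layout of objects passed across the boost boundary, which can produce exactly this kind of stack/memory corruption. A minimal sketch of the effect (the struct is made up, and the #pragma pack workaround is only an assumption I have not verified against boost):

struct Example {        // hypothetical type shared with a prebuilt library
    char   tag;
    double value;       // where this lands depends on the packing setting
};

// Built with the default packing (/Zp8): sizeof(Example) == 16.
// Built with /Zp4 (i.e. <StructMemberAlignment>4Bytes</StructMemberAlignment>):
// sizeof(Example) == 12, so the two sides disagree about object layout.

// One possible workaround (untested assumption): keep 4-byte packing globally,
// but restore the default packing around the boost includes.
#pragma pack(push, 8)
// #include <boost/program_options.hpp>   // boost headers would go here
#pragma pack(pop)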
I added another question regarding boost and structure member alignment.
I was trying to compile an OpenMP example and it refuses to compile, saying "undefined reference to 'OSCR_init'", "undefined reference to `OSCR_getarg_int'" and the same for several other functions. I located these functions in the file OmpSCR.h, which came in another folder, searched inside it and saw that these functions were declared as external, I believe in omp.h. I included the file with "include " in the example source (OmpSCR.h was already included), hoping that it would solve the problem, but nothing improved. I do have omp.h; it came with the OS. Can it be a version conflict? I got the example file from OMPSCR_v2.0.tar.gz. What should I do?
An "undefined reference" error means that no definition of the function was found at link time. A declaration in a header (such as omp.h) doesn't provide an implementation for the function; it just tells the compiler that the function exists somewhere. You have to link your program with a library that actually provides the function's implementation.
Basically, you just need to link your program to an OpenMP library. The way to do this depends on which compiler and which OpenMP implementation you're using, neither of which you've specified, so I can't provide specifics. (But if you happen to be using GCC, you should use the -fopenmp option for both compiling and linking.)
I am aware that implicitly linking to libraries at load time can lead to performance increases, and as such I was wondering whether it is good practice to link in this way at compile time, thus increasing executable size (admittedly only marginally), compared to linking explicitly at runtime. My question is: when linking against Microsoft Windows DLL files located in System32, is it 'better' to link at load time, since you can be mostly certain that the libraries will be present, or to follow the explicit approach?
Language used is Delphi (pascal) and the library in question is the WTsAPI32.dll - Terminal Services.
EDIT: As pointed out, my choice of language was incorrect and has been amended. Also, having only ever extensively linked to libraries on Unix, my comments about executable size can be disregarded: I believed at the time that I was in fact referring to static linking, which bundles the library code into the executable, and I now realise this is impossible when using DLL files (DUH!). Thanks all.
The two forms of DLL linking are perhaps better named implicit and explicit. Implicit linking is what you refer to as static linking, and explicit linking is what you refer to as runtime linking.
For implicit linking the linker writes entries into the import table of the executable file. This import table is metadata that is used by the loader to resolve DLL imports at module load time. A stub function is included for each implicit import that is only a few bytes in size. The executable size implications of implicit linking are negligible.
With explicit linking the imported function's address is resolved by a call to GetProcAddress. This call is made when the programmer chooses. If the DLL or the function cannot be resolved, the programmer can code fall back behaviour. There are size implications to explicit linking that I estimate to be similar to implicit linking. If the function address is evaluated once and remembered between calls then the performance characteristics are similar to implicit linking.
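For reference, here is what explicit linking looks like in C++ terms (your question is about Delphi, but the underlying Win32 calls are the same; the particular WTSAPI32 function is only an example):

#include <windows.h>
#include <wtsapi32.h>   // for WTS_CURRENT_SERVER_HANDLE / WTS_CURRENT_SESSION;
                        // note that wtsapi32.lib is deliberately NOT linked here

// The pointer type must match the export's signature and calling convention.
typedef BOOL (WINAPI *WTSLogoffSessionPtr)(HANDLE, DWORD, BOOL);

void logoff_if_available()
{
    HMODULE wtsapi = LoadLibraryW(L"wtsapi32.dll");
    if (!wtsapi)
        return;                                     // DLL missing: degrade gracefully

    WTSLogoffSessionPtr logoff =
        (WTSLogoffSessionPtr)GetProcAddress(wtsapi, "WTSLogoffSession");
    if (logoff)
        logoff(WTS_CURRENT_SERVER_HANDLE, WTS_CURRENT_SESSION, FALSE);

    FreeLibrary(wtsapi);
}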
My advice is as follows:
Prefer implicit linking. It is more convenient to code.
If the DLL may not be present, use explicit linking.
If the DLL must be loaded using a full path, use explicit linking.
If you want to unload the DLL during program execution, use explicit linking.
You specifically mention Windows DLLs. You can safely assume that they will be present. Don't try to code to allow your program to run in case user32.dll is missing. Some functions may not be present in older versions of Windows. If you support those older versions you'll need to use explicit linking and provide a fallback. Decide which version you support and use MSDN to be sure that a function is available on your minimum supported platform.
If your only two options are static linking and run-time dynamic linking, then the latter is the best choice for linking with Windows DLLs because it's your only choice. You cannot link statically to a DLL because DLLs are exclusively for dynamic linking; that's what the D stands for. Microsoft does not provide static libraries for the OS modules, so you cannot link to them statically.
But those typically aren't your only two options. There's a third, namely load-time dynamic linking.
In Delphi, you use load-time dynamic linking by marking a function declaration external and specifying the name of the DLL where the function resides. If you use the function, then an entry is created in your module's import table, and when the OS loads your module, it reads the table, loads the referenced DLL, looks up the address of the function, and stores the address in your program's memory image so that your program can call it directly.
You use run-time dynamic linking by declaring a function pointer, and then using LoadLibrary and GetProcAddress to look up the function's address prior to calling it. In newer Delphi versions, you can also declare a function in the same style that load-time dynamic linking uses, but then mark it with delay. In that case, the Delphi run-time library will call LoadLibrary and GetProcAddress on your behalf the first time you call the function.
The size differences are negligible. Run-time dynamic linking requires your program to contain code to load and link to libraries, but load-time dynamic linking stores more function references in the import table.
Run-time dynamic linking offers more flexibility in the face of uncertain DLL availability. With load-time dynamic linking, if a DLL is missing, or if it doesn't have all the functions mentioned in your import table, then the OS will fail to load your program — none of your code will run. With run-time dynamic linking, however, you have the opportunity to recover from the problem. You can disable certain parts of your program that the missing DLL depends on, or you can search for DLLs in non-standard places, or you can provide alternative implementations of missing functions.
If the functions you're calling are integral to your program's ability to operate, and there's ample reason to expect the functions to be present wherever your program is installed, then you should choose to link at load time. It allows you to write simpler code. You can be confident that you'll have the required functions if they are available on a certain version of Windows that you check for in your installer, or if they're provided by DLLs that you distribute with your program.
On the other hand, if the functions you're calling are optional, then you should prefer to link at run time. Use that for loading plug-ins, or for taking advantage of advanced OS features while maintaining backward compatibility. (For example, you might want to take advantage of Windows Vista theme support when it's present, but still allow your program to run on Windows XP.)
Why do you think that compile-time linking to dynamic libraries would increase EXE size? I believe you are misled by the somewhat poor choice of terms used in Windows programming long ago. Let us instead use the relative terms "early binding" and "late binding" for the choice of who should search for procedure names: the compiler/loader, or the programmer's custom code.
Using early binding (aka static linking against a dynamic library), your EXE contains the following values (in special tables):
DLL1 Name:
procedure "aaaaa" into the variable $1234
procedure "bbbbb" into the variable $5678
DLL2 Name:
procedure "ccccc" into the variable $4567
...et cetera.
Now, when you turn this into runtime loading (dynamic linking against dynamic libraries) it would look like
VarH1 := SafeLoadLibrary(DLL1 Name);
if Error-Loading-DLL then do-error-handling;
Var1234 := GetProcAddress(VarH1, "aaaaa");
if Error-Searching-For-Function then do-error-handling;
Var5678 := GetProcAddress(VarH1, "bbbbb");
if Error-Searching-For-Function then do-error-handling;
et cetera.
Obviously, in the latter case your EXE contains all the same values as in the first case, but on top of that it contains a lot of code to deal with those values, which was simply absent before.
So, while the EXE size difference is not really large by today's memory sizes, it is still in favor of early binding (static linking against a dynamic library).
Then what are the benefits of late binding? For example, you can load different DLLs from different paths, determined at runtime by configuration: flexibility and avoidance of DLL Hell (funnily, the concept of avoiding DLL Hell works against the concept of saving space). You can make your application work with limited functionality if a DLL fails to load, whereas a statically bound EXE would simply not load at all: the graceful degradation concept. And, not least, you can give the user much better, more meaningful error messages than Windows ever could.
And a last word about where you got that notion of EXE size from. I believe you took it from discussions about (attention!) static linking against static libraries. That is when OBJ/LIB/DCU files are not part of the distribution, but are just temporary code containers that ultimately take their place inside the monolithic EXE. Then yes, your EXE has all those libraries inside itself and thus grows larger. However, that case has nothing to do with dynamic libraries (DLLs).
The wording chosen long ago overuses the static/dynamic terms for two closely related topics: how the library is loaded (compile time vs runtime), and how the functions inside the library are located, or bound (by the developer's custom code, or by some OS-provided or compiler-provided toolset well before the first line of your source starts executing).
Due to that ambiguity, those close but different concepts start overlapping, and sometimes this leads to total confusion.
Now, what more can static (load-time) linking give you on modern Windows versions? The WinSxS folder. Nowadays Windows tends to keep multiple versions of each system DLL, and your program may ask for a specific version of it (while the System32 folder holds the most recent version, which your program may not be prepared for). You can then create a special MANIFEST resource and compile it into the EXE, asking Windows to load DLLs not by name alone but by name+version. You can replicate that functionality with dynamic loading as well, but with the Windows-provided toolset it is much easier.
Now you can decide which of those options matter for your particular case and make a somewhat better informed choice.
HTH.
I am giving Gwan a whirl.
Having made it through example code, I started a small project with more than one source file. I now have two problems:
I got a linking error at server startup:
Linking main.cpp: undefined symbol: _ZN7GwanUrl9concatAllEv
(the main file #includes the two other files; all the files are in the csp directory)
As an alternative to having all the files in the /csp directory, I would like to make a library outside of the /csp directory while still using some of the gwan functions. Sadly, a tonne of errors follow -- when I compile with GCC from the command line, not via G-WAN startup.
In file included from /home/ec2-user/gwan/include/gwan.h:22,
from Xbufstream.h:10,
from Xbufstream.cpp:10:
/usr/include/time.h:199: error: ‘size_t’ does not name a type
.....
Does anyone know what the gwan g++ argument string looks like?
First, this is not a linker issue: you have "undefined symbol" rather than "unresolved symbol" as an error.
This is simply an #include issue.
Define the main() function in your script.cpp file.
There's a G-WAN folder dedicated to user-defined include files called /gwan/include, but you can just as well use /csp/my_include.hpp... if you use the right syntax:
For example, having #include "toto.hpp" in /csp/hello.cpp lets me reach C++ functions defined and implemented in the gwan/include/toto.hpp file (or defined in toto.hpp and implemented in a pre-compiled library linked to your script with #pragma link).
If you rather use #include <toto.hpp> then the SYSTEM INCLUDE PATH will be searched instead (and this will work providing that your library was correctly installed).
If you want to use #include "toto.hpp" for a custom folder that was not set up in the system, you can use G-WAN's #pragma include "../my_folder" directive to specify its PATH, or you can explicitly specify it in each include: #include "../my_folder/toto.hpp".
Nothing fancy there, only C/C++ dependency rules apply (and G-WAN really helps by providing alternate ways that do not involve system settings).
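As a hedged sketch of the "right syntax" case above (toto.hpp is the example header from this answer, and the two API calls are the ones used by the stock hello example, so double-check them against your G-WAN version):

// csp/hello.cpp -- a C++ servlet using a user header from gwan/include
#include "gwan.h"        // G-WAN API definitions; the symbols are resolved by G-WAN itself
#include "toto.hpp"      // found in gwan/include, or via #pragma include "../my_folder"

int main(int argc, char *argv[])
{
    xbuf_cat(get_reply(argv), "Hello from toto!");   // append text to the reply buffer
    return 200;                                      // HTTP status code returned to the client
}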
For libraries (see the G-WAN examples for SQLite, Cairo, mySQL, cURL, etc.) you can either use pre-installed libraries that exported their location in SYSTEM variables... or put your library in the /gwan/libraries folder and their include file in the /gwan/include folder.
When writing your own libraries, remember that they need to be pre-compiled. This means that you obviously cannot use G-WAN symbols: your compiler may #include "gwan.h" (to get the definitions), but your linker will not know where the G-WAN symbols can be found. The way around this is to always use the G-WAN API from the G-WAN scripts. Your custom libraries must either be general-purpose or buffer any payload intended to be used by G-WAN. No double copy is needed, since G-WAN provides the set_reply() call to let G-WAN use persistent replies built without the reply xbuffer provided to G-WAN servlets.
Now, a last word about linking (which was not the cause of your trouble but could add to the confusion). If you mix C and C++, use extern "C" {} to wrap your C++ prototypes called from C (otherwise you really will have "unresolved symbols").
With all this information, you should be ready to face every possible situation.
The issue of referencing gwan.h symbols inside #include files can also be solved by moving all the code into the header file, whether it's .h or .hpp.
It's ungraceful, but a fix nevertheless, and good enough for the simple extension I wanted.
Looking into /libraries/sqlite3/sqlite.h helped.
#gil, thanks for your time.
I am getting LNK2001 errors when trying to use Crypto++. The official advice for this is:
There are two ways you can deal with this, either change Crypto++ to export those classes, by using the CRYPTOPP_DLL macro, or link with both the DLL export library and a static library that contains the non-DLL classes and functions. The latter can be built by using the "DLL-Import" configuration of the cryptlib project.
I would prefer the first option, but since I am not experienced with Visual Studio, I cannot find where the macro is or how to apply it.
In short: Where do I find the macro and how do I execute it?
Cheers.
In short: Where do I find the macro and how do I execute it?
The macro is CRYPTOPP_IMPORTS. You use it when performing dynamic linking on Windows (i.e., the Crypto++ DLL).
You can 'execute' it in one of two ways. First, you can add #include <cryptopp/dll.h> to your stdafx.h. dll.h defines it, and dll.h must be included before any other Crypto++ headers. Second, add it to your project's preprocessor macros. In either case, CRYPTOPP_IMPORTS will be defined.
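A minimal stdafx.h along those lines might look like this (the second header is just an example of "any other Crypto++ header"):

// stdafx.h -- sketch for dynamic linking against the Crypto++ DLL
#include <cryptopp/dll.h>    // must come first; sets up the DLL-import declarations
#include <cryptopp/osrng.h>  // any further Crypto++ headers go after dll.h
// Alternatively, skip dll.h and add CRYPTOPP_IMPORTS to
// Project Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions.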
I suspect you have a different error, though. You're probably not including the Crypto++ library (for static linking) or Crypto++ import lib (for dynamic linking) in your project.
When should one implicitly or explicitly link to a DLL, and what are the common practices and pitfalls?
It is fairly rare to explicitly link a DLL. Mostly because it is painful and error prone. You need to write a function pointer declaration for the exported function and get the LoadLibrary + GetProcAddress + FreeLibrary code right. You'd do so only if you need a runtime dependency on a plug-in style DLL or want to select from a set of DLLs based on configuration. Or to deal with versioning, an API function that's only available on later versions of Windows for example. Explicit linking is the default for COM and .NET DLLs.
More background info in this MSDN Library article.
I'm assuming you refer to linking using a .lib vs loading a DLL dynamically using LoadLibrary().
Loading a DLL statically by linking to its .lib is generally safer. The linking stage checks that all the entry points exist at build time, so there is no chance you'll load a DLL that doesn't have the function you're expecting. It is also easier not to have to use GetProcAddress().
So generally you should use dynamic loading only when it is absolutely required.
I agree with the others who answered already (Hans Passant and shoosh). I want to add only two things:
1) One common scenario where you have to use LoadLibrary and GetProcAddress is the following: you want to use some new API that exists only in newer versions of Windows, but the API is not critical to your application. So you test with LoadLibrary and GetProcAddress whether the function you need exists, and use it if it does. What your program does if the function doesn't exist depends entirely on your implementation.
2) There is one important option which you did not include in your question: delayed loading of DLLs. In this case the DLL is loaded when one of its functions is first called, not at application start. This allows you to use import libraries (.lib files) in some scenarios where, at first glance, explicit linking would seem to be required. Moreover, it improves the startup time of applications and is widely used by Windows itself. So this approach is also recommended.
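For completeness, in C++ with MSVC the delay-load variant needs no code changes at all, only linker options; a hedged sketch (dwmapi.dll is just an example of a "newer Windows only" API, as in point 1):

// Delay-load setup with MSVC:
//   - link against the normal import library (dwmapi.lib below)
//   - add the linker options:  /DELAYLOAD:dwmapi.dll  delayimp.lib
//     (Project Properties -> Linker -> Input -> Delay Loaded DLLs)
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi")   // ordinary import-library reference

bool composition_enabled()
{
    // dwmapi.dll is mapped only when this first call happens; if the DLL is
    // missing, the delay-load helper raises an exception instead of the
    // whole process failing to start.
    BOOL enabled = FALSE;
    return SUCCEEDED(DwmIsCompositionEnabled(&enabled)) && enabled;
}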