#pragma comment(lib, "xxx.lib") equivalent under Linux? - gcc

I have a static library file called libunp.a. I know I can use gcc -lunp xx to link against the library.
Under the Microsoft C/C++ compiler I can use #pragma comment(lib,"xxx.lib") to tell the compiler to pull in the library; how can I do this under Linux/GCC?

There doesn't seem to be any mention of an equivalent pragma in the GCC manual's page on pragmas.
One reason I've seen for GCC not supporting linking directives in source code is that correct linking sometimes depends on link order, and you would have to guarantee that order no matter the order in which files are compiled. If you're going to go to that much work, you may as well just pass the linker arguments on the command line (or otherwise), I suppose.
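For example, with a traditional static archive the GNU linker resolves symbols left to right, so the position of -lunp matters (main.o here is a hypothetical object that calls into libunp.a):

gcc main.o -L. -lunp -o app    # works: libunp.a is scanned after main.o introduces undefined symbols
gcc -L. -lunp main.o -o app    # may fail with "undefined reference": libunp.a is scanned before anything needs it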

Libraries should be specified during the linking step. Such information simply doesn't belong inside a translation unit: a translation unit can be preprocessed, compiled and assembled without any linking stage at all.
Note that even on Windows, the fact that #pragma comment(lib,"xxx.lib") appears in the source file does not mean the compiler consumes it. It goes into the object file as a comment record and is subsequently used by the linker. Not much different from *nix.
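As a concrete sketch (file names are hypothetical), the library is only ever named at the final link step; the earlier steps never need to know about it:

gcc -c client.c -o client.o        # preprocess, compile, assemble; no linker involved
gcc client.o -L. -lunp -o client   # linking step: this is where libunp.a comes in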

Use this GCC flag to generate an error for unknown pragmas. It will quickly tell you whether the compiler understands a given pragma:
-Werror=unknown-pragmas
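For instance, given a hypothetical test file pragma_test.c containing the MSVC pragma:

#pragma comment(lib, "unp")
int main(void) { return 0; }

compiling it with

gcc -Werror=unknown-pragmas -c pragma_test.c

fails immediately, because GCC does not implement this MSVC pragma; without the flag the line would at best draw a warning under -Wall, or be quietly skipped.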

Related

CMake target_compile_features unordered_map

I'm trying to introduce using CMake to build my C++ project. When I run make I get an error :
[ 11%] Building CXX object CMakeFiles/game.dir/InputHandler.cpp.o
In file included from /usr/include/c++/5/unordered_map:35:0,
...
/usr/include/c++/5/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
#error This file requires compiler and library support...
I realise this is because CMake is not enabling C++11, which is needed to use unordered_map. After googling, I know I need to use target_compile_features() in my CMakeLists.txt, but I can't find anything that gives me the syntax/arguments I need; there are only examples. E.g. the CMake documentation gives:
add_library(mylib requires_constexpr.cpp)
target_compile_features(mylib PRIVATE cxx_constexpr)
But I don't need cxx_constexpr; I don't even know what that is. I need unordered_map.
Can anyone tell me the syntax I need to use, please, preferably with some sort of reference to the valid values to pass into that function?
std::unordered_map is part of the STL; it isn't a language keyword like constexpr or override. CMake doesn't provide compile features for STL functionality (that would be an overwhelming task to implement!). In your case, you really just want to tell CMake that you need C++11. This article provides all the details you should need on the topic, but to summarise it for this particular question: adding the following near the top of your CMakeLists.txt file should give you what you need:
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
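If you prefer the per-target target_compile_features() form from the question, CMake 3.8 and later also accept the meta-feature cxx_std_11, which requests the whole standard rather than one individual language feature (the target name game is taken from your build output, so adjust as needed):

target_compile_features(game PRIVATE cxx_std_11)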

What's the relationship between "gcc linking" and "ld linking"?

It's said that the Linux loader is /usr/bin/ld, but we usually use gcc/g++ to link libraries and executables; we barely use ld directly.
The last time I used ld manually was when I was learning Linux assembly, where the only way to generate an executable was to run ld on a .o file directly, without any libraries.
My question is: is gcc/g++ essentially a wrapper around ld, because raw ld is too difficult to use? Or should we never use ld explicitly for C/C++ program linking, and if so, why?
gcc supplies a few default options to ld.
ld doesn't know anything about C++, or any other language. ld has no idea what libraries your code needs to link with. If you try to link your compiled C++ code with ld directly, it'll bail out on you, since ld, by itself, has no idea where to find libstdc++, gcc's C++ runtime library. Do you use strings? vectors? Most of that is template code that gets compiled as part of your object module, but there are still a few precompiled bits in libstdc++ that need to be linked in.
When you give your compiled code to gcc to link, gcc will be courteous enough to pass all your files along to ld and tell it which libraries to link with, in addition to any you explicitly specify.
You can link with ld directly, if you want to, as long as you specify the same libraries and link options gcc uses. But why would you want to do that? Just use gcc to link your gcc-compiled code.
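If you're curious what those defaults actually are, the driver will show you: the -v flag makes gcc/g++ echo the full linker invocation it constructs (hello.o is a hypothetical object file; exact paths and libraries vary by platform and GCC version):

g++ -v hello.o -o hello    # prints the collect2/ld command line: crt*.o startup objects, -lstdc++, -lm, -lgcc, -lc, ...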
You shouldn't attempt to use ld directly to link a C++ program because you would need to know implementation details such as where the static part of the C++ runtime library is located. g++ knows these details, for example where to find the file libstdc++.a. If you tried to use ld directly, you would have to supply all of these "missing" static libraries yourself.
My question is: is gcc/g++ essentially a wrapper around ld
That's right.
because raw ld is too difficult to use?
Well, not really; you could use it yourself without too much trouble, but it's convenient to manage the entire build process through a single executable, with a single suite of flags, and often with a single command.
It's also likely that you'd have to provide absolute paths to some runtime libraries (e.g. libstdc++.a) yourself if you bypassed the wrapper (though I haven't tested this).
Or should we never use ld explicitly for C/C++ program linking, and if so, why?
You're free to do so if you want. The only reason people might raise their eyebrows is to ask why you're not doing it in the conventional manner. If you have a good reason to invoke ld directly, rather than going through g++ and passing through any linker flags that way, then go right ahead!
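And for the common case where you only want to hand a few extra options to ld, you never need to invoke it yourself: the driver forwards anything prefixed with -Wl, straight to the linker (these are ordinary GNU ld options; the map file name is made up):

g++ main.o -Wl,--as-needed -Wl,-Map,output.map -o app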

cannot get NEON intrinsics header to compile in XCode

I have some C++ code using NEON intrinsics. From what I have read, all you need to do is include arm_neon.h in your project. Then I read that this arm_neon.h header is not actually available to you automatically; you have to get it from the web. So I found this version and added it to my project:
http://clang.llvm.org/doxygen/arm__neon_8h-source.html
In my project's prefix.pch I added:
#import "arm_neon.h"
And when I try to build on my iPhone 6 device (I am not using the simulator), I get a billion errors inside the arm_neon.h file.
Can anyone please explain to me what I am missing here?
You've been misinformed about being able to pick up an arm_neon.h from the Internet. Generally the header is not just compiler-specific but compiler-version (even compiler-revision) specific. For GCC it relies on a number of compiler built-in function calls, and from your screenshot the same holds for Clang. As you'd expect, if the names of these internal-only functions change, the header fails to compile.
What surprises me is that you're unable to use an include of whichever arm_neon.h ships with your build environment. The only thing I can think of that would cause this is the build command trying to build for x86_64 (for the simulator) but you say this isn't what is happening. It might be worth checking your build settings one more time.
If you're still not getting anywhere, remember that arm_neon.h is sometimes treated as a system header, so in C++ you might need to #include <arm_neon.h> rather than #include "arm_neon.h" to get the compiler to search the system paths.
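Putting that together, a minimal sketch of the conventional setup: include the header your toolchain ships, via angle brackets, and guard it so that non-NEON slices of a fat build never see it (the function name is made up for illustration):

#if defined(__ARM_NEON) || defined(__ARM_NEON__)
#include <arm_neon.h>

/* Add four lanes of single-precision floats at once. */
static inline float32x4_t add4(float32x4_t a, float32x4_t b)
{
    return vaddq_f32(a, b);
}
#endif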

Is there an advantage to linking a shared object library from gcc generated objects with g++?

I came across a project recently that created its shared object libraries by linking gcc-generated object files (using the CC gnu makefile macro) with g++.
Aside from (possibly) ensuring the source code is wrapped in #ifdef __cplusplus / extern "C" { / #endif constructs to avoid name-mangling problems, is there any reason why this would be ... better?
If they only link with g++ then adding preprocessor checks and extern "C" is useless, that only affects preprocessing and compilation, at the linking stage that's already done.
They might have wanted to ensure that exceptions could propagate through their C library, but to do that they only need to link to libgcc not libstdc++.
Maybe they just wanted the shared library to depend on libstdc++, so that users of the library would also depend on libstdc++ and wouldn't have to link to it explicitly, although that might not work as they expected.
So in short, no, I can't think of any good reason if all the code is C code and not C++ code.
However, just because something is compiled with gcc doesn't mean it's C code: you can use the gcc executable to compile C++ code, and it will invoke the C++ front-end (cc1plus) instead of the C front-end (cc1). If the C++ code uses the standard library then you either need to link with -lstdc++ or use g++ to link (which automatically links with -lstdc++). So maybe the answer is that it's C++ code, and the fact they compiled the objects with gcc rather than g++ made you think it was C code.
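You can check that theory with a trivial experiment (module.cpp is hypothetical and assumed to use something from the standard library, e.g. iostream):

gcc -c module.cpp                # the gcc driver still runs the C++ front-end (cc1plus) on a .cpp file
gcc module.o -o app              # link fails: undefined references to std:: symbols
gcc module.o -lstdc++ -o app     # works: the C++ runtime is named explicitly
g++ module.o -o app              # works: g++ adds -lstdc++ for you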

Migrating from Winarm to Yagarto

This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gccs and the other binaries (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference: enough to make the difference between success and failure when trying to use the same source and linker script. Now if this is 100% your code, with no libraries or any other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behaviour: code generation changes from gcc release to gcc release, and if your code contains pieces whose semantics are implementation-dependent, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on them would break.
Seen that happen a lot of times.
Try it with different optimization options in the compile and see if that makes a difference.
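A classic example of the kind of code that "works" under one compiler release and breaks under the next (hypothetical, for illustration; the bug is the code's, not the compiler's):

int checksum(const unsigned char *buf, int n)
{
    int i, sum;                /* sum is never initialized */
    for (i = 0; i < n; i++)
        sum += buf[i];         /* "worked" while the old build happened to leave 0 in that stack slot */
    return sum;                /* undefined behaviour: any change in register allocation changes the result */
}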
Both WinARM and YAGARTO are based on gcc and should treat .ld files equally. Also, both use the GNU make utility, so makefiles will be processed the same way.
If you are running your project with an OCD (on-chip debugger), then there may be differences between the two toolchains' implementations of the OpenOCD debugger. Also, the commands sent to the debugger to configure it could be different.
If you are producing a hex file, the output could differ because the two toolchains do not use the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you I'd check the compilation/linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or FP conventions. It might even be compiling using an instruction set extension that isn't supported by your CPU.
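One concrete way to compare the defaults is to ask each compiler directly; GCC has supported -Q --help=target since the 4.3 series, so this may only work on the Yagarto side (the arm-elf- prefix is a guess; adjust to whatever your toolchains install):

arm-elf-gcc -Q --help=target | grep -i -e cpu -e abi -e thumb -e float

Better still, pin the important options explicitly in the makefile (for an AT91SAM7S that means -mcpu=arm7tdmi) so that neither toolchain's defaults matter.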
