C++ shared library symbols versioning - gcc

I'm trying to create a library with two versions of the same function using the __asm__(".symver ...") approach.
library.h
#ifndef CTEST_H
#define CTEST_H
int first(int x);
int second(int x);
#endif
library.cpp
#include "simple.h"
#include <stdio.h>
__asm__(".symver first_1_0,first#LIBSIMPLE_1.0");
int first_1_0(int x)
{
printf("lib: %s\n", __FUNCTION__);
return x + 1;
}
__asm__(".symver first_2_0,first##LIBSIMPLE_2.0");
int first_2_0(int x)
{
int y;
printf("lib: %d\n", y);
printf("lib: %s\n", __FUNCTION__);
return (x + 1) * 1000;
}
int second(int x)
{
printf("lib: %s\n", __FUNCTION__);
return x + 2;
}
And here is the version script file
LIBSIMPLE_1.0 {
    global:
        first; second;
    local:
        *;
};

LIBSIMPLE_2.0 {
    global:
        first;
    local:
        *;
};
When I build the library using gcc, everything works well and I am able to link against the library binary. Using the nm tool I can see that both the first() and second() function symbols are exported.
Now, when I try to use g++, none of the symbols are exported.
So I tried using the extern "C" directive to wrap both declarations:
extern "C" {
int first(int x);
int second(int x);
}
nm shows that the second() function symbol is exported, but first() still remains unexported and mangled.
What am I missing to make this work? Or is it impossible to achieve this with the C++ compiler?

I don't know why, with 'extern "C"', 'first' was not exported - suspect there is something else interfering.
Otherwise C++ name mangling is certainly a pain here. The 'asm' directives (AFAIK) require the mangled names for C++ functions, not the simple 'C' name. So 'int first(int)' would need to be referenced as (e.g.) '_Z5firsti' instead of just 'first'. This is, of course, a real pain as far as portability goes...
The linker version script is more forgiving, as it supports 'extern "C++" {...}' blocks that list C++ symbols in their as-written form, e.g. 'first(int)'.
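A sketch of what that means for the code above (assuming the Itanium C++ ABI used by g++, where int first(int) mangles to _Z5firsti, int first_1_0(int) to _Z9first_1_0i, and int first_2_0(int) to _Z9first_2_0i):
// library.cpp compiled with g++ and no extern "C": the .symver directives
// must name the mangled symbols (Itanium ABI assumed).
__asm__(".symver _Z9first_1_0i,_Z5firsti@LIBSIMPLE_1.0");
int first_1_0(int x) { return x + 1; }

__asm__(".symver _Z9first_2_0i,_Z5firsti@@LIBSIMPLE_2.0");
int first_2_0(int x) { return (x + 1) * 1000; }
The version script itself can stay readable by listing the functions inside an extern "C++" block, e.g. extern "C++" { "first(int)"; "second(int)"; };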
This whole process is a maintenance nightmare. What I'd really like would be a function attribute which could be used to specify the alias and version...
Just to add a reminder that C++11 now supports inline namespaces which can be used to provide symbol versioning in C++.
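A minimal sketch of that approach (the namespace names are illustrative, not from the question): the inline namespace becomes part of the mangled name, so unqualified calls and new links resolve to whatever lives in the inline namespace, while older versions stay reachable under their explicit namespace.
namespace libsimple {
    inline namespace v2 {                  // default: libsimple::first resolves here
        int first(int x) { return (x + 1) * 1000; }
    }
    namespace v1 {                         // still reachable as libsimple::v1::first
        int first(int x) { return x + 1; }
    }
}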

Related

complex and double complex methods not working in visual studio c project

The code below throws an error in a Visual Studio C project. The same code works on Linux with the GCC compiler. Please let me know of any solution to make it run properly on Windows.
#include <stdio.h>
#include <complex.h>
#include <math.h>

typedef struct {
    float r, i;
} complex_;

double c_abs(complex_* z)
{
    return (cabs(z->r + I * z->i));
}
int main()
{
    complex_ number1 = { 3.0, 4.0 };
    double d = c_abs(&number1);
    printf("The absolute value of %f + %fi is %f\n", number1.r, number1.i, d);
    return 0;
}
The error I am getting is
C2088: '*': illegal for struct
What I observed is that the I macro is not working properly...
So is there any other way to handle this on Windows?
error C2088: '*': illegal for struct
This is the error MSVC returns when compiling the code (as C) for this line.
return (cabs(z->r + I * z->i));
The MSVC C compiler does not have a native complex type, and (quoting the docs) "therefore the Microsoft implementation uses structure types to represent complex numbers".
The imaginary unit I a.k.a. _Complex_I is defined as an _Fcomplex structure in <complex.h>, which explains compile error C2088, since there is no operator * to multiply that structure with a float value.
Even if the multiplication worked out by some magic, the end result would be a float value being passed into the cabs call, but MSVC declares cabs as double cabs(_Dcomplex z); and there is no automatic conversion from float to _Dcomplex so the call would still fail to compile.
What would work with MSVC, however, is to replace that line with the following, which constructs a _Dcomplex on the fly (here as a C99 compound literal, which recent MSVC C compilers accept) from the float real and imaginary parts.
return cabs((_Dcomplex){ z->r, z->i });

duplicate symbol of a function defined in a header file

Suppose I have a header file file_ops.hpp that looks something like this
#pragma once
#include <cstdint>

bool systemIsLittleEndian() {
    uint16_t x = 0x0011;
    uint8_t *half_x = (uint8_t *) &x;
    if (*half_x == 0x11)
        return true;
    else
        return false;
}
I initially thought it had something to do with the implementation, but as it turns out, I'll get duplicate symbols with just
#pragma once
bool systemIsLittleEndian() { return true; }
If I make it inline, the linker errors go away. That's not something I want to rely on, since inline is a request not a guarantee.
What causes this behavior? I'm not dealing with a scenario where I'm returning some kind of singleton.
There are other methods that are marked as
bool MY_LIB_EXPORT someFunc();// implemented in `file_ops.cpp`
Are these related somehow (mixed exported functions and "plain old functions")? Clearly I can just move the implementation to file_ops.cpp; I'm rather intrigued as to why this happens.
If I make it inline, the linker errors go away. That's not something I want to rely on, since inline is a request not a guarantee.
It's OK to inline the function.
Even if the object code is not inlined, the language guarantees that it will not cause linker errors or undefined behavior, as long as the function's definition is identical in all translation units.
If you #include the .hpp in hundreds of .cpp files, you may notice a bit of code bloat but the program is still correct.
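For reference, a minimal header-only sketch (the question's function, just marked inline):
// file_ops.hpp
#pragma once
#include <cstdint>

// 'inline' relaxes the one-definition rule for this function: every .cpp that
// includes the header may carry an identical copy, and the linker folds them.
inline bool systemIsLittleEndian() {
    uint16_t x = 0x0011;
    uint8_t *half_x = (uint8_t *) &x;
    return *half_x == 0x11;
}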
What causes this behavior? I'm not dealing with a scenario where I'm returning some kind of singleton.
The #include mechanism is a convenience for avoiding manually repeating the exact same code in multiple files. In the end, every translation unit that #includes other files gets the lines of code from the files it #includes.
If you #include file_ops.hpp in, let's say, file1.cpp and file2.cpp, it's as if you have:
file1.cpp:
bool systemIsLittleEndian() {
    uint16_t x = 0x0011;
    uint8_t *half_x = (uint8_t *) &x;
    if (*half_x == 0x11)
        return true;
    else
        return false;
}
file2.cpp:
bool systemIsLittleEndian() {
    uint16_t x = 0x0011;
    uint8_t *half_x = (uint8_t *) &x;
    if (*half_x == 0x11)
        return true;
    else
        return false;
}
When you compile those two .cpp files and link them together to create an executable, the linker notices that there are two definitions of the function named systemIsLittleEndian. That's the source of the linker error.
One solution without using inline
One solution to your problem, without using inline, is:
Declare the function in the .hpp file.
Define it in the appropriate .cpp file.
file_ops.hpp:
bool systemIsLittleEndian(); // Just the declaration.
file_ops.cpp:
#include "file_ops.hpp"
// The definition.
bool systemIsLittleEndian() {
    uint16_t x = 0x0011;
    uint8_t *half_x = (uint8_t *) &x;
    if (*half_x == 0x11)
        return true;
    else
        return false;
}
Update
Regarding
bool MY_LIB_EXPORT someFunc();// implemented in `file_ops.cpp`
There is a lot of information on the web about this; it is a Microsoft/Windows issue. Here are a couple of starting points to learn about it.
Exporting from a DLL Using __declspec(dllexport)
Importing into an Application Using __declspec(dllimport)
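As a rough illustration (the MY_LIB_BUILDING flag and the macro body below are assumptions for the sketch, not taken from the question), such an export macro is usually defined along these lines:
#if defined(_WIN32)
#  if defined(MY_LIB_BUILDING)            // defined while building the DLL itself
#    define MY_LIB_EXPORT __declspec(dllexport)
#  else                                   // consumers of the DLL import the symbol
#    define MY_LIB_EXPORT __declspec(dllimport)
#  endif
#else
#  define MY_LIB_EXPORT                   // expands to nothing on non-Windows platforms
#endif

MY_LIB_EXPORT bool someFunc();            // implemented in file_ops.cpp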

link functions with mismatching signature

I'm playing around with the gcc and g++ compilers, compiling some C code with each of them. My goal is to see how the compiler/linker enforces that, when linking a module containing a function declaration against a module containing that function's implementation, the correct function is linked (in terms of the parameters passed and the values returned).
For example, let's take a look at this code:
#include <stdio.h>

extern int foo(int b, int c);

int main()
{
    int f = foo(5, 8);
    printf("%d", f);
}
After compilation, my symbol table has a symbol for foo, but nothing in the ELF file describes the arguments it takes or its signature (int(int, int)). So if I write some other code such as this:
char foo(int a, int b, int c)
{
    return (char) ( a + b + c );
}
and compile that module, it will also have a symbol called foo. What happens if I link these modules together? I had never thought about this, and I wonder how a compiler overcomes this weakness. I know that g++ prefixes every symbol according to its namespace, but does it also take the signature into account? If anyone has ever encountered this, it would be great if they could shed some light on the problem.
The problem is solved with name mangling.
In compiler construction, name mangling (also called name decoration)
is a technique used to solve various problems caused by the need to
resolve unique names for programming entities in many modern
programming languages.
It provides a way of encoding additional information in the name of a
function, structure, class or another datatype in order to pass more
semantic information from the compilers to linkers.
The need arises where the language allows different entities to be
named with the same identifier as long as they occupy a different
namespace (where a namespace is typically defined by a module, class,
or explicit namespace directive) or have different signatures (such as
function overloading).
Note the simple example:
Consider the following two definitions of f() in a C++ program:
int f (void) { return 1; }
int f (int) { return 0; }
void g (void) { int i = f(), j = f(0); }
These are distinct functions, with no relation to each other apart
from the name. If they were natively translated into C with no
changes, the result would be an error — C does not permit two
functions with the same name. The C++ compiler therefore will encode
the type information in the symbol name, the result being something
resembling:
int __f_v (void) { return 1; }
int __f_i (int) { return 0; }
void __g_v (void) { int i = __f_v(), j = __f_i(0); }
Notice that g() is mangled even though there is no conflict; name
mangling applies to all symbols.
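For comparison, a real g++ or clang build (Itanium C++ ABI) of the same snippet produces mangled names like the ones in the comments below; they can be listed with nm and decoded back with c++filt:
int f (void) { return 1; }                  // emitted as _Z1fv ('v' encodes the empty parameter list)
int f (int)  { return 0; }                  // emitted as _Z1fi ('i' encodes the int parameter)
void g (void) { int i = f(), j = f(0); }    // emitted as _Z1gv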
Wow, I kept exploring and testing this on my own and came up with a result that quite amazed me.
I wrote the following code and compiled it with the gcc compiler.
main.c
#include <stdio.h>

extern int foo(int a, char b);

int main()
{
    int g = foo(5, 6);
    printf("%d", g);
    return 0;
}
foo.c
typedef struct {
    int a;
    int b;
    char c;
    char d;
} mystruct;

mystruct foo(int a, int b)
{
    mystruct my;
    my.a = a;
    my.b = a + 1;
    my.c = (char) b;
    my.d = (char) (b + 1);
    return my;
}
First I compiled foo.c to foo.o with gcc and checked the symbol table using readelf; it had an entry called foo.
Then I compiled main.c to main.o, checked its symbol table, and it also had an entry called foo. I linked the two together and, surprisingly, it worked. Running the result I obviously got a segmentation fault, which makes sense: the actual implementation of foo in foo.o expects an extra hidden parameter (the address of the struct to be returned) that is never passed under the declaration main.o was compiled against, so the implementation reads memory from main's stack frame that does not belong to it, follows addresses it thinks it was given, and ends up with a segmentation fault. That's fine.
Now I compiled both modules again with g++ instead of gcc, and what came up was amazing: the symbol entry in foo.o was _Z3fooii and in main.o it was _Z3fooic. My guess is that the ii suffix means int, int and the ic suffix means int, char, i.e. the parameters that should be passed to the function are encoded in the name, which lets the linker match a function declaration to the right implementation. So I changed my foo declaration in main.c to
extern int foo(int a, int b);
Recompiled, and this time got the symbol _Z3fooii. I linked both modules again and, amazingly, this time it linked. Running it again produced a segmentation fault, which also makes sense, since the mangled name does not encode the return type, so mismatched return values still go unchecked. Anyway, this confirms my original thought: g++ includes the function signature in the symbol name and thereby forces the linker to match each function declaration, with its parameters, to the correct implementation.
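One caveat worth noting (not part of the original experiment): declarations wrapped in extern "C" are exempt from this checking, because g++ then emits the plain, unmangled C name.
// With extern "C", g++ emits the unmangled symbol "foo": no parameter types
// are encoded, so a mismatched definition would again link without complaint.
extern "C" int foo(int a, char b);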

Different compiler behavior with C++11

The following code
#include <vector>
#include <complex>
#include <algorithm>

template<class K>
inline void conjVec(int m, K* const in) {
    static_assert(std::is_same<K, double>::value || std::is_same<K, std::complex<double>>::value, "");
    if(!std::is_same<typename std::remove_pointer<K>::type, double>::value)
#ifndef OK
        std::for_each(in, in + m, [](K& z) { z = std::conj(z); });
#else
        std::for_each(reinterpret_cast<std::complex<double>*>(in), reinterpret_cast<std::complex<double>*>(in) + m, [](std::complex<double>& z) { z = std::conj(z); });
#endif
}

int main(int argc, char* argv[]) {
    std::vector<double> nums;
    nums.emplace_back(1.0);
    conjVec(nums.size(), nums.data());
    return 0;
}
compiles fine on Linux with
Debian clang version 3.5.0-9
gcc version 4.9.1
icpc version 15.0.1
and on Mac OS X with
gcc version 4.9.2
but not with
clang-600.0.56
icpc version 15.0.1
except if the macro OK is defined. I don't know which compilers are at fault; could someone let me know? Thanks.
PS: here is the error
10:48: error: assigning to 'double' from incompatible type 'complex<double>'
std::for_each(in, in + m, [](K& z) { z = std::conj(z); });
The difference is that on Linux, you're using libstdc++ and glibc, and on MacOS you're using libc++ and whatever CRT MacOS uses.
The MacOS version is correct. (Also, your workaround is completely broken and insanely dangerous.)
Here's what I think happens.
There are multiple overloads of conj in the environment. C++98 brings in a single template, which takes a std::complex<F> and returns the same type. Because this template needs F to be deduced, it doesn't work when calling conj with a simple floating point number, so C++11 added overloads of conj which take float, double and long double, and return the appropriate std::complex instantiation.
Then there's a global function from the C99 library, ::conj, which takes a C99 double complex and returns the same.
libstdc++ doesn't yet provide the new C++11 conj overloads, as far as I can see. The C++ version of conj isn't called. It appears, however, that somehow ::conj found its way into the std namespace, and gets called. The double you pass is implicitly converted to a double complex by adding a zero imaginary part. conj negates that zero. The result double complex is implicitly converted back to a double by discarding the imaginary component. (Yes, that's an implicit conversion in C99. No, I don't know what they were thinking.) The result can be assigned to z.
libc++ provides the new overloads. The one taking a double is chosen. It returns a std::complex<double>. This class has no implicit conversion to double, so the assignment to z gives you an error.
The bottom line is this: your code makes absolutely no sense. A vector<double> isn't a vector<complex<double>> and shouldn't be treated as one. Calling conj on double doesn't make sense. Either it doesn't compile, or it's a no-op. (libc++'s conj(double) is in fact implemented by simply constructing a complex<double> with a zero imaginary part.) And wildly reinterpret_casting your way around compile errors is horrible.
Sebastian Redl's answer explains why your code didn't compile with libc++ but did with libstdc++. A plain if is not the static if that exists in some languages; even if the code in an if branch is 100% dead, it must still be valid code.
In any event, this feels like a massive amount of unnecessary complexity to me. Not everything has to be a template. Especially when your template can only be used with two types, and when used with one of those two it's a no-op.
Compare:
template<class K>
inline void conjVec(int m, K* const in) {
    static_assert(std::is_same<K, double>::value || std::is_same<K, std::complex<double>>::value, "");
    if(!std::is_same<K, double>::value)
        std::for_each(reinterpret_cast<std::complex<double>*>(in), reinterpret_cast<std::complex<double>*>(in) + m, [](std::complex<double>& z) { z = std::conj(z); });
}
with:
inline void conjVec(int m, double* const in) {}
inline void conjVec(int m, std::complex<double>* const in) {
    std::for_each(in, in + m, [](std::complex<double>& z) { z = std::conj(z); });
}
I know which one I would prefer.
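For completeness, a quick usage sketch of the overload version (the vector setup mirrors the question's main; the complex case is added here to show the in-place conjugation):
#include <vector>
#include <complex>
#include <algorithm>

inline void conjVec(int, double* const) {}                   // real data: nothing to do
inline void conjVec(int m, std::complex<double>* const in) {
    std::for_each(in, in + m, [](std::complex<double>& z) { z = std::conj(z); });
}

int main() {
    std::vector<std::complex<double>> nums{{1.0, 2.0}};
    conjVec(static_cast<int>(nums.size()), nums.data());     // nums[0] becomes (1, -2)
    std::vector<double> reals{1.0};
    conjVec(static_cast<int>(reals.size()), reals.data());   // no-op, still compiles
    return 0;
}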

gcc "not inlined" warning

Does gcc's inline __attribute__((__always_inline__)) generate a warning when the compiler can't inline a function?
Because VS does, per http://msdn.microsoft.com/en-us/library/z8y1yy88.aspx:
If the compiler cannot inline a function declared with __forceinline,
it generates a level 1 warning.
You need -Winline to get warnings about non-inlined functions.
If you want to verify this you can try taking the address of an inline function (which prevents it from being inlined) and then you should see a warning.
#include <stdio.h>

static inline __attribute__ ((always_inline)) int add(int a, int b)
{
    return a + b;
}

int main(void)
{
    printf("%d\n", add(21, 21));
    printf("%p\n", add);
    return 0;
}
EDIT
I've been trying to produce a warning with the above code and other examples without success; it seems that the behaviour of current versions of gcc and clang may have changed in this area. I'll delete this answer if I can't come up with a better example that generates a warning.
