Object hierarchy using std::list from libc++ (bug?) - xcode

I think I found a bug in the libc++ list implementation. The following code produces a compiler error (Field has incomplete type 'foo') when using certain build settings in Xcode:
#include <list>
using namespace std;

class foo {
public:
    list<foo> bar;
};
The settings are the following:
Xcode Version: 4.4.1
C++ Language Dialect: C++11 or GNU++11
C++ Standard library: LLVM C++ standard library with C++ extensions (libc++11)
Using GCC's libstdc++ will resolve the error.
Not using the C++11 dialect will resolve the error.
Using vector instead of list will resolve the error.
I think it is a bug in the list implementation, but I am not sure.
Forgive my ignorance, but I don't know what I should do to resolve this issue.
Switching to vector is not an option in my project, and I definitely need C++11 features. That also includes shared_ptr, but the corresponding headers are missing when using GCC. Besides that, Apple does not seem to provide new versions of GCC anymore.
I would very much appreciate it if somebody could reproduce this issue, maybe with newer headers from libc++.
Also, if updating LLVM/libc++ would resolve this issue, do you recommend it?

C++ Standard 17.6.4.8:
In certain cases (replacement functions, handler functions, operations on types used to instantiate standard library template components), the C++ standard library depends on components supplied by a C++ program. If these components do not meet their requirements, the Standard places no requirements on the implementation.
In particular, the effects are undefined in the following cases:
...
if an incomplete type (3.9) is used as a template argument when instantiating a template component, unless specifically allowed for that component.
None of the standard library's container class templates, including list, grants any such allowance for an incomplete element type. So your program is invalid; it might happen to work with some compilers, but that can't be considered a bug in the standard library implementation.
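A common workaround (my addition, not part of the original answer) is to break the recursion with an extra level of indirection: a pointer to an incomplete type is always a valid data member, so the container's element type stays complete. A minimal sketch, assuming C++11 and std::unique_ptr:

#include <list>
#include <memory>

class foo {
public:
    // foo only needs to be complete where elements are created or
    // destroyed, not at the point of this member declaration
    std::list<std::unique_ptr<foo>> bar;
};

Note that this changes the semantics (ownership and copying behave differently than with a list of values), so it is a workaround rather than a drop-in replacement.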

Related

Detecting ABI compatibility issues with GCC

I recently spent a fairly substantial amount of time tracking down a problem that turned out to be caused by compiling a library with -D_GLIBCXX_DEBUG (which tells libstdc++ to use a debug version of the standard library with extra checks) but compiling the client program without. This caused an ABI compatibility problem.
Is there some way I can automatically detect problems like this with GCC? Visual Studio provides the detect_mismatch pragma, which I think would have served this purpose, but I'm unaware of any GCC equivalent. GCC does something with embedding a symbol name (e.g. GLIBCXX_3.4.9), and I can imagine schemes that would cause a linking error for an undefined symbol if a corresponding symbol (e.g. mylib_debug_stl) were not present, but the only ways I can think of to get a use of that symbol are really hacky.
Alternatively, how do other people avoid this issue? Build the checked version of the library to a different name or something like that?
Is there some way I can automatically detect problems like this with GCC?
Only the linker can detect if you link incompatible code, not the compiler.
The alternative linker, gold, can detect some problems with the --detect-odr-violations option.
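For reference (my example, not from the original answer), the invocation would look roughly like the following; the check relies on debug information, so compile with -g, and newer GCC versions can select gold via -fuse-ld=gold:
g++ -g -fuse-ld=gold -Wl,--detect-odr-violations main.o libfoo.a -o app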
Alternatively, how do other people avoid this issue? Build the checked version of the library to a different name or something like that?
I just ensure I rebuild everything when I want to use the Debug Mode; I don't think I've ever wanted to keep a library around that was built with Debug Mode. It's meant for debugging, not for normal use.
I rarely use -D_GLIBCXX_DEBUG anyway; I more often do something like:
#if 0
#include <debug/vector>
namespace my_class_stl = __gnu_debug;
#else
#include <vector>
namespace my_class_stl = std;
#endif

struct my_class
{
    typedef my_class_stl::vector<int> container;
    typedef container::iterator iterator;
    // ...
};
Then I change the preprocessor condition when I want to use a Debug Mode vector for that specific class, without affecting every container in the program. Because the change involves writing to the file (and so updating its timestamp), anything that depends on that header will get rebuilt by make, and there are two distinct types, std::vector<int> and __gnu_debug::vector<int>, which have different symbols and can't be confused by the linker.
Just defining _GLIBCXX_DEBUG doesn't cause all dependencies to be rebuilt, and it silently alters the definition of std::vector globally, rather than changing specific containers to a different type with a different name, __gnu_debug::vector.
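As an aside, the link-error scheme the questioner sketches can be approximated by hand; this is my illustration, not something either answer recommends, and the canary symbol names are invented. Every translation unit that includes the header references the canary matching its own _GLIBCXX_DEBUG setting, while the library defines only the one it was built with, so a mismatch shows up as an undefined reference at link time:

// mylib_buildmode.h -- shipped with the library
#ifdef _GLIBCXX_DEBUG
extern int mylib_built_with_glibcxx_debug;     // defined only in debug builds of the lib
static int* const mylib_buildmode_check __attribute__((used))
    = &mylib_built_with_glibcxx_debug;
#else
extern int mylib_built_without_glibcxx_debug;  // defined only in normal builds of the lib
static int* const mylib_buildmode_check __attribute__((used))
    = &mylib_built_without_glibcxx_debug;
#endif

// mylib_buildmode.cpp -- compiled into the library itself
#ifdef _GLIBCXX_DEBUG
int mylib_built_with_glibcxx_debug = 0;
#else
int mylib_built_without_glibcxx_debug = 0;
#endif

The __attribute__((used)) keeps GCC from discarding the otherwise unreferenced static pointer; planting a dummy static in every client translation unit is exactly the kind of hackiness the questioner was complaining about.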
turned out to be caused by compiling a library with -D_GLIBCXX_DEBUG (which tells libstdc++ to use a debug version of the standard library with extra checks) but compiling the client program without.
It is an explicit libstdc++ debug mode design goal to support such a configuration, and I somewhat doubt that that was the actual cause of your problem.
The problem may have disappeared after you rebuilt the library without -D_GLIBCXX_DEBUG, but that doesn't prove that the ABI incompatibility was the root cause.

Is there a difference between using properties declared in a class extension and associated storage?

With Objective-C, you can add iVars/properties to a class using the associated object support in the runtime.
With LLVM 2, you can now add iVars/properties to a class by declaring them in a class extension.
Is there a difference between the two? I have a feeling that LLVM just wraps the runtime support, but I'm not sure.
I believe these are two different mechanisms.
Under the fragile ABI, associated object support was the only way to extend a class's storage; it works (I believe) at runtime by allocating an additional list of associated objects.
Now, with LLVM 2, you can declare ivars in class extensions, but that works only under the non-fragile ABI (try to compile that code for 32-bit Leopard with the fragile ABI and you'll get syntax errors).
Here's an article explaining how the non-fragile ABI works. It requires both the compile time and runtime support.

Is it possible to compile/link to occi with gcc on HPUX?

We have Oracle 11 running on HP-UX 11.31 and gcc 4.4.3. It seems that there is no way to link to occi, because it was built with aCC. Is there any workaround for this?
I had the silly idea that I could somehow build a library that basically proxied the connection - build the library with aCC in some way that could be linked to by gcc. Is this possible?
No, there isn't a way around that.
Different C compilers generate interchangeable code because they follow a common platform ABI. You can mix and match their object code more or less with impunity.
However, different C++ compilers have a variety of different conventions that mean that their object code is not compatible. These relate to class layout (especially in multiple inheritance hierarchies and the dreaded 'diamond-of-death'), but also in name mangling conventions and exception handling. The name mangling schemes are deliberately made different so that you cannot accidentally link objects from one compiler with another.
Generally, if libraries are built using a C++ compiler, you have to link your code using the same - or at least a compatible - C++ compiler. And that almost invariably means a compiler from the same family. For example, you might be able to use G++ 4.5.0 even if the code was built with G++ 4.4.2. However, you won't be able to mix aCC with G++.

How do I compile boost using __cdecl calling convention?

I have a project compiled using the __cdecl calling convention (msvc2010), and I compiled boost using the same compiler with the default settings.
The project linked with boost, but at runtime I got an assert message like this:
File: ...\boost\boost\program_options\detail\parsers.hpp
Line: 79
Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention.
There are the following questions:
what calling convention does boost build with by default on Windows (msvc2010)?
how do I compile boost with the __cdecl calling convention?
why wasn't boost able to prevent linking with code that uses a different calling convention? I understood that boost has really smart library auto-linking code.
Update #1
It looks like boost does compile and link with the proper calling convention, yet at runtime I still get the above problem. I made a sample application using the same code and it works, but in my application it fails. The only difference could be the project configuration or the includes/stdafx.h.
Just use
bjam ... cxxflags=/Zp4
while building boost libraries.
As far as I know there's no way to make C++ use cdecl calling conventions (see MSDN Calling Convention); C++ member-function calls simply work differently from C calls. The only opportunity you have to use one of the C calling conventions is for functions, which includes class static functions in C++. If you know that's the case, you can try forcing the option during the build:
bjam cxxflags=/Gd ...
(see BBv2 Builtin features)
Or, to make it "permanent", set up a user-config.jam with your compiler and add it to the build options for all BBv2 msvc builds (see BBv2 Configuration and related docs). As for your other questions:
Boost uses the default calling convention MSVC uses, except for cases where it overrides it at the code level. I don't know where those are, as they are library-specific, so you'd have to search the code for the "__*" decorators.
See above for a partial answer.
Detection: there are two reasons. There is a limit to how many different options we can reasonably detect for building, as the number of possible variations grows exponentially, so we limit it to the most important cases. And in the case of calling conventions, detection isn't actually possible, since the convention can be changed on a per-function basis.
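To illustrate that last point (my example, not the answerer's; the function names are invented), MSVC treats the calling convention as a per-declaration property, so no single project-wide flag can be detected reliably:

// Calling convention is a per-declaration property in MSVC, so a single
// translation unit can freely mix conventions:
int __cdecl    parse(const char* text);           // caller cleans up the stack
int __stdcall  on_event(int code);                // callee cleans up the stack
int __fastcall hash_block(const void* p, int n);  // first arguments passed in registers

struct widget {
    // Non-static member functions use __thiscall by default, regardless
    // of the project-wide /Gd, /Gr or /Gz switch.
    void draw();
};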
I found the cause of the problem inside one of the shared property files: <StructMemberAlignment>4Bytes</StructMemberAlignment>
If I remove it, the code works. Still, I'm not sure why this is happening or how I could solve it without removing the above setting (which was required by another library).
I added another question regarding boost and structure member alignment.
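For background (my illustration, not part of the original thread): <StructMemberAlignment>4Bytes</StructMemberAlignment> corresponds to the /Zp4 compiler switch, and a packing mismatch changes object layout, so objects built with boost's default packing are read at the wrong offsets by code built with /Zp4. A minimal sketch of the layout difference:

#include <cstddef>
#include <iostream>

struct sample {
    char tag;      // followed by padding up to the alignment of double
    double value;  // offset 8 with default packing, offset 4 with /Zp4
};

int main() {
    std::cout << "sizeof(sample)  = " << sizeof(sample) << '\n'
              << "offset of value = " << offsetof(sample, value) << '\n';
}

Compiled once with /Zp4 and once without, the two translation units disagree about both sizeof(sample) and the offset of value, which is enough to corrupt memory when such objects cross the library boundary.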

Runtime Issues While Mixing Libraries from Different Versions of Visual Studio

I have encountered an unexpected Access Error while running a project I've built using two different versions of Visual Studio. My general configuration is as follows:
LibA is a static lib, static runtime linkage, msvc 8.0
LibB is a static lib, static runtime linkage, msvc 9.0
My target project for integration is a msvc 9.0 COM dll, which statically links the above libraries
This project builds, but crashes at runtime with an access violation in some STL code. The stack seems to indicate that I've passed through headers from both versions (8 and 9) during a call into a stream insertion operator. I realize that this is a problem.
Somehow, this call:
ost << std::dec << port_; //(originating from an object in LibA)
...descends through the following stack trace:
std::basic_ostream::operator<<(...) (ostream:283, msvc 8.0 version <-- expected, since LibA was built with this version)
std::num_put::put(...) (xlocnum:888, msvc 8.0 version <-- expected, since LibA was built with this version)
std::num_put::do_put(...) (xlocnum:1158, msvc 9.0 version!! !##$!%! <-- not expected, since LibA was built with msvc 8.0)
std::ios_base::flags() (xiosbase:374, msvc 9.0 version <-- follows from above)
The access violation happens in std::ios_base::flags(). I suspect it is due to the mix of implementations in the call stack (although I am not sure).
My questions are:
1.) Is the likely cause of this access violation the mixing of msvc header implementations?
2.) Is there a way to prevent these implementations from mixing?
3.) Is there a better way to configure these three projects for integration (assuming moving LibA from msvc 8.0 is undesirable)?
I am aware of the ideas raised in this question and this one. Here I am most interested in this specific problem, and if there is some way to avoid it.
Any insights would be appreciated.
You can't use different STL implementations in the same project; that includes different versions from the same compiler. If your LibA has a function that accepts a std::vector as an argument, you are only allowed to pass a vector object from the STL that LibA was built with. This is why many C++ libraries expose only a C API.
Either you change your API or you rebuild all your projects using the same compiler.
You are doing something that you shouldn't; you are in the world of undefined behavior. There is no point in trying to debug this particular crash. Even if you managed to make this line work, you would get a new crash somewhere else.
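To illustrate the "expose only a C API" advice above (this sketch is mine; the header and function names such as liba_format_port are invented), the idea is to keep every STL type behind a plain C boundary so the client never instantiates LibA's standard library headers:

// liba_c_api.h -- the only header clients see; no C++ or STL types cross it
#ifdef __cplusplus
extern "C" {
#endif

/* Formats 'port' into 'buffer' (at most 'size' bytes, including the NUL).
   Returns the number of characters written, or -1 on error. */
int liba_format_port(int port, char* buffer, int size);

#ifdef __cplusplus
}
#endif

// liba_c_api.cpp -- compiled with LibA's own compiler (msvc 8.0); STL stays internal
#include <sstream>
#include <string>
#include <cstring>
#include "liba_c_api.h"

extern "C" int liba_format_port(int port, char* buffer, int size)
{
    std::ostringstream ost;
    ost << std::dec << port;   // the original insertion, now confined to LibA
    const std::string s = ost.str();
    if (size <= 0 || s.size() + 1 > static_cast<std::size_t>(size))
        return -1;
    std::memcpy(buffer, s.c_str(), s.size() + 1);
    return static_cast<int>(s.size());
}

The client (the msvc 9.0 COM dll) calls liba_format_port and never touches LibA's ostream or num_put instantiations, so the two runtimes never have to agree on STL object layout.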
There is no guarantee of library binary compatibility between major versions of MSVC. STL code is mostly template code that gets expanded into your code, so your static libraries probably have incompatible chunks of STL code inside of them.
In general, this shouldn't be a problem, unless that STL code is part of the interface to the library. For example, if you pass iterators or a reference to a vector from one library to another, you're in trouble.
The best solution is to build everything with the same version of the compiler. If you can't do that (e.g., if the one of the libraries is from a third-party), you're probably stuck.

Resources