Storing pairs in a GCC rope with C++11

I'm using a GCC extension rope to store pairs of objects in my program and am running into some C++11 related trouble. The following compiles under C++98
#include <ext/rope>

typedef std::pair<int, int> std_pair;

int main()
{
    __gnu_cxx::rope<std_pair> r;
}
but not with C++11 under G++ 4.8.2 or 4.8.3.
What happens is that the uninitialized_copy_n algorithm is pulled in from two places: ext/memory and the C++11 version of the memory header. The __gnu_cxx namespace is pulled in by rope and the std namespace by pair, and there are now two identically defined functions in scope, leading to a compile error.
I assume this is a bug in a weird use case of a rarely used library, but what would be the correct fix? You can't remove the function from ext/memory without breaking existing code, and it is now required to be in std. I've worked around it by using my own pair class, but how should this be fixed properly?

If changing the libstdc++ headers is an option (and I asked in the comments whether you were looking for a way to fix it in libstdc++ or to work around it in your program), then the simple solution, to me, is to make sure there is only one uninitialized_copy_n function. ext/memory already includes <memory>, which provides std::uninitialized_copy_n. So instead of defining __gnu_cxx::uninitialized_copy_n, it can have using std::uninitialized_copy_n; inside the __gnu_cxx namespace. It can even make this conditional on C++11 support, so that pre-C++11 code gets the custom implementation of those functions and C++11 code gets the std implementation.
This way, code that attempts to use __gnu_cxx::uninitialized_copy_n, whether directly or through ADL, will continue to work, but there is no ambiguity between std::uninitialized_copy_n and __gnu_cxx::uninitialized_copy_n, because they are the very same function.
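To make the idea concrete, here is a sketch of what that could look like inside ext/memory (illustrative only, not the actual libstdc++ patch):
#include <memory>

namespace __gnu_cxx
{
#if __cplusplus >= 201103L
    // C++11 and later: share the std function, so only one
    // uninitialized_copy_n is visible and there is no ambiguity.
    using std::uninitialized_copy_n;
#else
    // Pre-C++11: keep the existing SGI-era definition here, unchanged.
#endif
}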

Related

How to call C++ code in llvm ... IRBuilder?

I am using LLVM to build a simple language along the lines demonstrated in the LLVM kaleidoscope documentation. However the doc omits two important things (at least):
how to call C++ code
how to do strings
You can imagine a custom language that allows constructs like:
String str("Hello World");
String str1(" and StackOverflow");
str.append(str1);
Since LLVM ships with SmallString, the obvious thing to do is treat the user type String as an llvm::SmallString and emit code for it. But how do I do this? I see very little sense in creating a String class from scratch provided there's a way to invoke those C++ calls from LLVM. Would one use IRBuilder to build out the C++ calls?
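One way to bridge this gap (a hedged sketch; the mylang_* names and the fixed SmallString<32> size are my own illustration, not anything LLVM provides) is to hide the C++ container behind a small runtime of extern "C" wrappers, compile that runtime normally with a C++ compiler, and have the generated IR merely declare and call those symbols, e.g. with IRBuilder::CreateCall:
#include <llvm/ADT/SmallString.h>

extern "C" void *mylang_string_create(const char *s)
{
    return new llvm::SmallString<32>(llvm::StringRef(s));
}

extern "C" void mylang_string_append(void *dst, const void *src)
{
    llvm::SmallString<32> *d = static_cast<llvm::SmallString<32> *>(dst);
    const llvm::SmallString<32> *s = static_cast<const llvm::SmallString<32> *>(src);
    d->append(s->str());   // append the other string's characters
}

extern "C" void mylang_string_destroy(void *p)
{
    delete static_cast<llvm::SmallString<32> *>(p);
}
The generated IR then only needs declarations for these symbols, which keeps the emitted code small.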
Another option is to create a C++ file containing the equivalent C++ code for the user-provided code in terms of llvm::SmallString, intermixed with inlined IR code à la the Kaleidoscope demo, and pass the whole thing to Clang/LLVM to compile down to object code. But Clang/LLVM doesn't allow intermixing this way.
Putting aside the technicalities of creating static strings that go into the initializer, how do people do this? Once this is sorted out for strings, the approach would most likely also work for sets, hash maps, vectors, and the various other containers LLVM ships with.
LLVM is a strange, cool thing. The things I need are right there, and not readily accessible based on any documentation I can find.
I want a considered approach because even this simple hello-world program generates 81 lines of IR code. Replace the llvm::SmallString with a C string and the resulting IR is smaller by almost a factor of 10:
#include <llvm/ADT/SmallString.h>
#include <cstdio>

int main(int argc, char **argv)
{
    llvm::SmallString<32> str("hello world\n");
    printf("%s\n", str.c_str());
    return 0;
}
That's a lot of code to assemble by hand, and we're nowhere close to supporting all the methods one would want for strings.
Thanks.

GCC __attribute__((always_inline)) and lambdas, is this syntax correct?

I am using GCC 4.6 as part of the LPCXpresso IDE for a Cortex embedded processor. I have very limited code size, especially when compiling in debug mode. Using __attribute__((always_inline)) has so far proven to be a good tool to inline trivial functions; this saves a lot of code bloat in debug mode while still maintaining readability. I expect it to be somewhat mainstream and supported in the future because it is mentioned here: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0348c/CIAJGAIH.html
Now to my question: is this the correct syntax for declaring a lambda always_inline?
#define ALWAYS_INLINE __attribute__((always_inline))
[](volatile int &i) ALWAYS_INLINE { i++; }
It does work; my question is whether it will continue to work in the future, and what I can do to ensure that it does. If I ever switch to another major compiler that supports C++11, will I find a similar keyword with which I can replace __attribute__((always_inline))?
If I were to meet my fairy godmother I would wish for a compiler directive which causes all lambdas which are constructed as temporaries with empty constructors and bound by reference to be automatically inlined even in debug mode. Any ideas?
Will it continue to work in the future?
Likely, but always_inline is compiler-specific, and since there is no standard specifying its exact behavior with lambdas, there is no guarantee that it will continue to work.
What can I do to ensure it works?
That depends on the compiler, not on you. If a future version drops support for always_inline on lambdas, you will have to stick with a version that works, or write your own preprocessor that inlines lambdas marked with an always_inline-like keyword.
If I ever switch to another major compiler that supports C++11, will I find a similar keyword?
Likely, but again, there is no guarantee. The only real standard is the C++ inline keyword, and it is not applicable to lambdas. For non-lambda functions it only suggests inlining and tells the compiler that the function may be defined in multiple translation units.
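For what it is worth, a hedged sketch of the usual portability shim (the empty fallback is my own choice; I am not aware of another compiler's spelling that can be attached to a lambda in this position):
#if defined(__GNUC__)
#define ALWAYS_INLINE __attribute__((always_inline))
#else
#define ALWAYS_INLINE
#endif

// GCC accepts the attribute between the lambda's parameter list and its body:
auto bump = [](volatile int &i) ALWAYS_INLINE { ++i; };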

gcc - gdb - pretty print stl

I'm currently doing some research on the STL, especially on printing the contents of STL containers during debugging. I know there are many different approaches.
Like:
http://sourceware.org/gdb/wiki/STLSupport
or using a shared library to print the content of a container
What I'm currently looking into is why g++ drops functions that are not used. For example, I have the following code and compile it with g++ -g main.cpp -o main.o.
#include <vector>
#include <iostream>

using namespace std;

int main() {
    std::vector<int> vec;
    vec.push_back(10);
    vec.push_back(20);
    vec.push_back(30);
    return 0;
}
So when I debug this code, I see that I can't use print vec.front(). The message I receive is:
Cannot evaluate function -- may be inlined
Therefore I tried the setting -fkeep-inline-functions, but nothing changed.
When I use nm main.o | grep front, I see that there is no entry for the method .front(). Doing the same again, but with an extra vec.front() call in my code, I can use print vec.front(), and nm main.o | grep front now shows the entry
0000000000401834 W _ZNSt6vectorIiSaIiEE5frontEv
Can someone explain to me how I can keep all functions within my code without losing them? I thought that dead functions do not get removed as long as I don't enable optimization, as in:
How to tell compiler to NOT optimize certain code away?
Why I need it: current Python implementations use the internal STL implementation to print the content of a container, but it would be much more interesting to use the functions defined by ISO/IEC 14882. I know it's possible to write a shared library, compiled together with your actual code before you debug it, so that all the STL functions are available, but who wants to compile an extra library into their code before debugging? It would also be interesting to know the advantages and disadvantages of these two approaches (shared library and Python).
What exactly is a dead function? Isn't it a function which is available in my source code but isn't used?
There are two cases to consider:
int unused_function() { return 42; }
int main() { return 0; }
If you compile the above program, unused_function is dead -- it is never called. However, it will still be present in the final executable (even with optimization [1]).
Now consider this:
template <typename T> int unused_function(T*) { return 42; }
int main() { return 0; }
In this case, unused_function will not be present, even when you turn off all optimizations.
Why? Because the template is not a "real" function. It's a prototype, from which the compiler can create "real" functions (called "template instantiation") -- one for each type T. Since you've never used unused_function, the compiler didn't create any "real" instances of it.
You can request that the compiler explicitly instantiate all functions of a given class with an explicit instantiation request, like so:
#include <vector>
template class std::vector<int>;
int main() { return 0; }
Now, even though none of the vector functions are used, they are all instantiated into the final binary.
[1] If you are using the GNU ld (or gold), you could still get rid of unused_function in this case, by compiling with -ffunction-sections and linking with -Wl,--gc-sections.
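For example, a command line along those lines (a sketch; the output name is arbitrary) would be g++ -g -ffunction-sections main.cpp -Wl,--gc-sections -o main.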
Thanks for your answer. Just to repeat: template functions don't get instantiated by GCC because they are prototypes. Only when the function is used, or when it is explicitly instantiated, will it be available within my executable.
So what we have mentioned so far is:
function definition: int unusedFunc() { return 10; }
function prototype: int prototypeFunc(); (just to break it down)
What happens when you inline functions? I always thought that the function body would be inserted at the call site, but now I read that compilers often decide on their own what to do. (That sounds strange, because there must be a rule.) It doesn't matter whether you use the keyword inline, for example:
inline int inlineFunc() { return 10; }
A friend of mine also told me that he couldn't take the address of certain functions, although he hadn't used inline. Are there any function kinds I forgot? He also told me that there should be differences within the object file format.
Edit - I also forgot:
nested functions
function pointers
overloaded functions

Where Is gcvt or gcvtf Defined in gcc Source Code?

I'm working on some old source code for an embedded system on an m68k target, and I'm sometimes seeing massive memory allocation requests when calling gcvtf to format a floating-point number for display. I can probably work around this by writing my own substitute routine, but the nature of the error has me very curious, because it only occurs when the heap starts at or above a certain address, and it goes away if I hack the .ld linker script or remove any set of global variables (which are placed before the heap in my memory map) whose combined size is enough that the heap starts below the mysterious critical address.
So, I thought I'd look in the gcc source code for the compiler version I'm using (m68k-elf-gcc 3.3.2). I downloaded what appears to be the source for this version at http://gcc.petsads.us/releases/gcc-3.3.2/, but I can't find the definition for gcvt or gcvtf anywhere in there. When I search for it, grep only finds some documentation and .h references, but not the definition:
$ find | xargs grep gcvt
./gcc/doc/gcc.info: C library functions `ecvt', `fcvt' and `gcvt'. Given valid
./gcc/doc/trouble.texi:library functions #code{ecvt}, #code{fcvt} and #code{gcvt}. Given valid
./gcc/sys-protos.h:extern char * gcvt(double, int, char *);
So, where is this function actually defined in the source code? Or did I download the entirely wrong thing?
I don't want to change this project to use the most recent gcc, due to project stability and testing considerations, and like I said, I can work around this by writing my own formatting routine, but this behavior is very confusing to me, and it will grind my brain if I don't find out why it's acting so weird.
Wallyk is correct that this is defined in the C library rather than the compiler. However, the GNU C library is (nearly always) only used with Linux compilers and distributions. Your compiler, being a "bare-metal" compiler, almost certainly uses the Newlib C library instead.
The main website for Newlib is here: http://sourceware.org/newlib/, and this particular function is defined in the newlib/libc/stdlib/efgcvt.c file. The sources have been quite stable for a long time, so (unless this is a result of a bug) chances are pretty good that the current sources are not too different from what your compiler is using.
As with the GNU C source, I don't see anything in there that would obviously cause the weirdness you're seeing, but it all eventually comes down to wrappers around the basic sprintf routines.
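For illustration, a minimal stand-in of that shape (a sketch only; the name my_gcvtf and the digit clamp are mine, and it assumes sprintf itself behaves on the target):
#include <cstdio>

char *my_gcvtf(float value, int ndigit, char *buf)
{
    if (ndigit > 17)
        ndigit = 17;   // keep the precision request within double's useful range
    std::sprintf(buf, "%.*g", ndigit, static_cast<double>(value));
    return buf;
}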
It is in the GNU C library as glibc/misc/efgcvt.c. To save you some trouble, the code for the function is:
char *
__APPEND (FUNC_PREFIX, gcvt) (value, ndigit, buf)
     FLOAT_TYPE value;
     int ndigit;
     char *buf;
{
  sprintf (buf, "%.*" FLOAT_FMT_FLAG "g", MIN (ndigit, NDIGIT_MAX), value);
  return buf;
}
The directions for obtaining glibc are here.

Where is the source code for isnan?

Because of the layers of standards, the include files for C++ are a rat's nest. I was trying to figure out what __isnan actually calls, and couldn't find an actual definition anywhere.
So I just compiled with -S to see the assembly, and if I write:
#include <ieee754.h>

void f(double x) {
    if (__isinf(x)) { /* ... */ }
    if (__isnan(x)) { /* ... */ }
}
Both of these routines are called. I would like to see the actual definition, and possibly refactor things like this to be inline, since it should be just a bit comparison, albeit one that is hard to achieve when the value is in a floating point register.
Anyway, whether or not it's a good idea, the question stands: WHERE is the source code for __isnan(x)?
Glibc has versions of the code in the sysdeps folder for each of the systems it supports. The one you’re looking for is in sysdeps/ieee754/dbl-64/s_isnan.c. I found this with git grep __isnan.
(While C++ headers include code for templates, functions from the C library will not, and you have to look inside glibc or whichever.)
Here, for the master head of glibc, for instance.
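Since the question mentions that it should be just a bit comparison, here is that check written out by hand (a sketch of the usual IEEE 754 trick, not necessarily the exact code in s_isnan.c; the name my_isnan is mine):
#include <cstdint>
#include <cstring>

inline bool my_isnan(double x)
{
    std::uint64_t bits;
    std::memcpy(&bits, &x, sizeof bits);   // avoid strict-aliasing problems
    const std::uint64_t exp_mask  = 0x7ff0000000000000ULL;
    const std::uint64_t mant_mask = 0x000fffffffffffffULL;
    // NaN: all exponent bits set and a non-zero mantissa.
    return (bits & exp_mask) == exp_mask && (bits & mant_mask) != 0;
}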
