How is is_standard_layout implemented? - C++11

In general one can implement typical type_traits using template techniques.
However, I can't imagine how std::is_standard_layout could be implemented in these terms. http://en.cppreference.com/w/cpp/types/is_standard_layout
When I checked the GCC standard library, I found that it is implemented in terms of __is_standard_layout(T), which I could not find defined anywhere else. Is this a compiler magic function?
Would it be possible to implement std::is_standard_layout explicitly?
For example, one of the conditions is that all non-static data members are declared in the same class within the hierarchy.
That seems impossible to determine at compile time with template techniques alone.

Yes, __is_standard_layout is a compiler intrinsic, and no, std::is_standard_layout is not something you can implement without such intrinsics. As you've correctly pointed out, it needs more information than the C++ type system can express.
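For illustration, here is a minimal sketch of how a standard library can wrap that intrinsic. __is_standard_layout is built into the compiler itself (GCC and Clang both provide it), which is why you won't find it defined in any header; the wrapper name below is made up:

    #include <type_traits>

    // Minimal sketch: the trait simply forwards to the compiler intrinsic.
    // __is_standard_layout is recognized by the compiler, not declared anywhere.
    template <typename T>
    struct my_is_standard_layout
        : std::integral_constant<bool, __is_standard_layout(T)> {};

    struct A { int x; };
    struct B : A { int y; };  // data members in two classes -> not standard-layout

    static_assert(my_is_standard_layout<A>::value, "A is standard-layout");
    static_assert(!my_is_standard_layout<B>::value, "B is not standard-layout");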

Including ComplexExpr and ComplexFunc classes in Halide API

The ComplexExpr and ComplexFunc classes in the links below seem very convenient for working with complex numbers. Is there a plan to include them in the official Halide API? Or is there a reason why they are not included?
https://github.com/halide/Halide/blob/master/apps/fft/complex.h
https://github.com/halide/Halide/blob/be1269b15f4ba8b83df5fa0ef1ae507017fe1a69/apps/fft/funct.h
Speaking as a Halide developer...
Or is there a reason why they are not included?
We haven't included these historically since we didn't want to bless a particular representation for complex numbers. There are a few valid ways of dealing with them and the headers in question are just one.
Is there a plan to include them into the official Halide API?
We've started talking about packaging some of this type of code into a set of header-only "Halide tools" libraries, so named to avoid the normative implication of calling it something like "stdlib". So as of right now, there is no concrete plan, but the odds are nonzero.
In the meantime, the code is MIT licensed, so you should feel free to use those files, regardless.
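For reference, a minimal sketch of one such representation, in the spirit of apps/fft/complex.h: a complex value stored as a pair of Halide Exprs with the usual arithmetic overloads. The names here are illustrative, not an official Halide API:

    #include "Halide.h"

    // One possible representation: a complex value as two Exprs.
    struct ComplexExpr {
        Halide::Expr re, im;
        ComplexExpr(Halide::Expr r, Halide::Expr i) : re(r), im(i) {}
    };

    inline ComplexExpr operator+(ComplexExpr a, ComplexExpr b) {
        return ComplexExpr(a.re + b.re, a.im + b.im);
    }

    inline ComplexExpr operator*(ComplexExpr a, ComplexExpr b) {
        // (a.re + i*a.im) * (b.re + i*b.im)
        return ComplexExpr(a.re * b.re - a.im * b.im,
                           a.re * b.im + a.im * b.re);
    }

A different but equally valid choice would be, for example, representing complex values as two-element Halide Tuples, which is part of why no single representation has been blessed.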

Will Go compilers ignore unused functions?

If there is a function from an external package that is not used at all in my project, will the compiler remove the function from the generated machine code?
This question could be asked of any language compiler in general, but I think the behaviour may vary from language to language, so I am interested in knowing what the Go compilers do.
I would appreciate any help in understanding this.
The language spec does not mention this anywhere, and from a correctness point of view it is irrelevant.
But know that the current version does remove certain constructs that the compiler can prove are not used and whose removal will not change the runtime behaviour of the app.
Quoting from The Go Blog: Smaller Go 1.7 binaries:
The second change is method pruning. Until 1.6, all methods on all used types were kept, even if some of the methods were never called. This is because they might be called through an interface, or called dynamically using the reflect package. Now the compiler discards any unexported methods that do not match an interface. Similarly the linker can discard other exported methods, those that are only accessible through reflection, if the corresponding reflection features are not used anywhere in the program. That change shrinks binaries by 5–20%.
Methods are a "harder" case than functions because methods can be listed and called via reflection (unlike functions), but the Go tools do what they can to remove unused methods too.
You can see examples and proof of removed / unlinked code in this answer:
How to remove unused code at compile time?
Also see other relevant questions:
Splitting client/server code
Call all functions with special prefix or suffix in Golang

Java Bytecode manipulation libraries

I am starting to work on a project and for one of the tasks I need to analyze the source code in order to gather information about the classes and their methods. More specifically, for each method I need to know which internal attributes and external objects (references) it uses throughout the entire method body.
I discussed it with my supervisors and they think that bytecode manipulation libraries are the way to go. I already looked at BCEL, ASM and Javassist but I'm not sure which one I need to use. Do they all provide access to the method body where I can see all the instructions and get the information I need?
Any advice would be appreciated. Thank you!
If you really “need to analyze the source code”, then libraries that let you inspect the bytecode are not the way to go.
Otherwise, you really need to define your task precisely. Either you want to analyze classes, regardless of whether you look at their source code or byte code, or you want to analyze source code and are considering doing so by compiling it first and then analyzing the compiled result. In the latter case, you have to compare the effort of both steps with alternative solutions, e.g. direct source code analysis.
Parsing byte code is rather easy, easier than analyzing source code, which is the reason why byte code is produced prior to the execution of Java programs in the first place. To answer your concrete question: yes, all three libraries offer you a way to analyze the instructions and associated information. Which one best fits your needs is beyond the scope of Stack Overflow.
Whether analyzing the byte code helps depends on your exact requirements. When it comes to field and method access, you can get most of them precisely using that approach; only inlined compile-time constants lack their origins. When it comes to type use, you have to consider that not every source code artifact has a counterpart in the byte code: widening casts, for example, produce no actual code, and local variables usually don't have a declared type (debugging information aside), only an implied type that depends on how they are actually used. They also carry no information about generics unless debugging information has been included.

How to use AspectC++ with C++11?

I want to use the AspectC++ compiler for a C++11 project. I have read in the manual that C++11 support will come with version 2. I thought that aspect weaving happens only at the code level, so why does it depend on the C++ version used? Why does AspectC++ care about the source code when it just has to weave the aspects to generate a composed piece of code? Is there a way to use AspectC++ for C++11 source code? Or is there an alternative which can handle it?
This post is already a bit older, I know.
Nevertheless, I'd like to answer the question of why AspectC++ depends on the C++ version:
AspectC++ internally parses the code (among other things, to identify the locations where to weave). Not all of this can be delegated to external parsers, so it basically needs to understand the syntax itself.
Some new constructs introduced in C++11, like attributes ([[...]]), could not be handled by AspectC++ versions < 2.0. For example:
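The snippet below is valid C++11, but the [[...]] attribute syntax does not exist in C++03, so a parser that only understands C++03 cannot even read the declaration it is supposed to weave around:

    #include <stdexcept>

    // Valid C++11; a C++03-only parser rejects the attribute syntax.
    [[noreturn]] void fail(const char *msg) {
        throw std::runtime_error(msg);
    }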
To compile with C++11 support, just use -std=c++11.

GCC technical details

I don't know if this is the right place for things like this, but I am curious about a few aspects of the GCC front-end/back-end architecture:
I know I can compile .o files from C code and link them to C++ code, and I think I can do it the other way round, too. Does this work because the two languages are similar, or because the GCC back-end is really language-independent? Would this work with Ada code too? (I don't even know if that makes sense, since I don't know Ada or whether it even has "functions", but the question is understood. If it makes no sense, think "Pascal" or even "my own custom language front-end".)
Where would garbage collection be implemented? For example, a Java front-end. The way I understand it, if compiling to a JVM back-end, the "platform" will take care of the GC, so the front-end need not do anything about it; but if compiling to native code, would the front-end send garbage-collecting GENERIC code to the back-end, or does it turn on some flag telling the back-end to produce garbage-collecting code? The first makes more sense to me, but that would mean the front-end produces different output based on the target, which seems to miss the point of GCC's front-end/back-end architecture.
Where would language-specific libraries go? For instance, the standard Java classes or standard C headers. If they are linked in at the end, then could a C program theoretically call functions from the Java library or something like that, since it is just another linked library?
Yes, the backend is at least reasonably language independent. Yes, it works with Ada.
GCJ generates native code which uses a runtime library. The garbage collector is part of the runtime library.
GCJ implements the CNI, which allows you to write code in C++ that can be used as native methods by Java code -- but being able to do this is a consequence of them having designed it in, not just an accidental byproduct of using the same back-end.
It is possible because the calling conventions are compatible, but name mangling differs (there is no mangling in C). To call a C function from C++, declare it extern "C". To call a C++ function from C, you have to declare it under its mangled name (and possibly with additional or different argument types). Calling Fortran code is possible in some cases too, but the argument-passing convention differs (pass by reference in Fortran).
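A minimal sketch of the usual shared-header pattern (mylib.h and add are made-up names):

    /* mylib.h -- hypothetical header usable from both C and C++.
       The extern "C" block disables C++ name mangling, so both
       compilers refer to the same symbol at link time. */
    #ifdef __cplusplus
    extern "C" {
    #endif

    int add(int a, int b);  /* defined once, in either a C or a C++ file */

    #ifdef __cplusplus
    }
    #endif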
There were actually converters from C++ to C (cfront) and from Fortran to C (f2c), and some solutions from them are still in use.
Garbage collection is implemented in a runtime library, e.g. the Boehm collector; the back end just has to generate objects compatible with the selected GC library.
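As a rough sketch of that split between generated code and runtime library: code built against the Boehm collector just calls the library's allocator, and the collector does the rest (link with -lgc):

    #include <gc.h>      // Boehm-Demers-Weiser collector
    #include <cstdio>

    int main() {
        GC_INIT();
        // Allocate through the collector instead of malloc/new.
        int *p = static_cast<int *>(GC_MALLOC(sizeof(int)));
        *p = 42;
        std::printf("%d\n", *p);
        // No free/delete: unreachable memory is reclaimed by the collector.
    }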
The compiler driver (g++, gfortran, ...) adds the language-specific libraries to the linking step.
