Following the linked question, I'm wondering whether there are any side effects of enabling C++0x in GCC.
According to GCC's documentation, "GCC's support for C++0x is experimental".
What I'm afraid of is that, for example, the compiler will generate some code differently, or that the standard library will use some C++0x feature that is broken in GCC.
So if I don't explicitly use any C++0x features, can enabling the flag break my existing code?
C++0x support has been, and still is, under heavy development. One thing this means is that bugs get fixed quickly; another is that there might be small bugs present. I say small, for two reasons:
libstdc++ has not been rewritten from scratch, so all the old parts are just as stable as they were before any of this C++0x work, if not more stable thanks to several years of bug fixes.
There are corner cases in the new/old Standard that haven't been ironed out yet. Are these the runtime quirks you're talking about? No. C++0x support has been under development for four releases now; don't worry.
Most of the impact of that flag is in the new language features; library features such as move constructors and std::thread (on POSIX platforms) don't affect code that doesn't use them.
Bottom line: "experimental" is too strong a word for daily production use. The standard has changed during the three or four years GCC has been working on support, so code written against old revisions of C++0x may break in a newer GCC, but that's a good thing. C++0x is finished as far as the non-paying-for-a-PDF world is concerned, so no further breaking changes should be added. Decide beforehand whether you want the new stuff, because you won't be able to just switch it off once you've gotten used to using it.
Usually it won't break your source code, but you may (even without noticing or knowing it) introduce C++0x idioms that compile because these features are enabled yet won't compile on a strict C++03 compiler. For instance, in C++0x you can use >> to close a nested template argument list, but not in C++03, so if you forget to separate the brackets with a space you will have problems when you try to compile that code as plain C++.
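For example, here is a minimal sketch of the >> issue (the variable names are just for illustration):

```cpp
#include <vector>

// C++0x parses ">>" as closing two template argument lists.
std::vector<std::vector<int>> nested_0x;    // OK with -std=c++0x, error in strict C++03

// In C++03 the brackets must be separated, or ">>" is taken as the shift operator.
std::vector<std::vector<int> > nested_03;   // OK in both dialects
```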
Rvalue references/move semantics are a feature that can have a huge impact on how the compiler optimizes. Even if you don't use moves in your own code, the standard library headers will automatically switch to their new versions, which include move constructors and move assignment operators.
There are some circumstances in which the compiler is allowed to implicitly create move constructors/assignment operators for user-defined classes. These rules changed a few times during the standardization process (I don't even know what the current rules are), so depending on the exact version of your compiler, it could be using a set of rules for generating these implicit functions that isn't even in the latest version of the standard.
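As a hedged sketch of the kind of class this affects (exactly when the implicit move members get generated is precisely what changed between drafts and compiler versions; Record and transfer are made-up names):

```cpp
#include <string>
#include <vector>

// No user-declared destructor, copy or move operations, so the compiler
// may generate an implicit move constructor and move assignment operator
// for this class, depending on which revision of the rules it implements.
struct Record {
    std::string name;
    std::vector<int> values;
};

// Whether 'r' is moved or copied on return depends on those implicitly
// generated members, i.e. on the rules your particular compiler follows.
Record transfer(Record r) { return r; }
```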
Most of the other major C++0x changes don't have a big run-time impact; they are mostly compile-time features (constexpr, string literals, variadic templates) or syntax helpers (range-based for, auto, initializer lists).
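A quick illustration of the "syntax helper" side; this should compile down to essentially the same code you would have written by hand in C++03:

```cpp
#include <iostream>
#include <vector>

constexpr int answer() { return 42; }        // may be evaluated at compile time

int main() {
    std::vector<int> v = {1, 2, 3};          // initializer list
    for (auto x : v)                         // auto + range-based for
        std::cout << x * answer() << '\n';
}
```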
I originally wrote the question you link to because of (in my opinion) a very big issue, as described here. Basically, overloading a function on shared_ptr to a const type was not recognized by the compiler. That's a huge flaw in my opinion. It was fixed between GCC 4.5 and GCC 4.6, but it serves as an example of a big bug that is still around in the default installation of GCC on Ubuntu, for example. So while bugs are fixed quickly, there still might be bugs, and you may waste a weekend looking for the source of, and the solution to, those bugs.
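For reference, a hedged sketch of the general shape of code involved (this is not the exact code from the linked report; Widget and inspect are made-up names): an overload taking shared_ptr to a const type sits next to one taking shared_ptr to the non-const type, and the affected compiler versions got the overload resolution wrong.

```cpp
#include <iostream>
#include <memory>

struct Widget {};

void inspect(const std::shared_ptr<Widget>&)       { std::cout << "non-const\n"; }
void inspect(const std::shared_ptr<const Widget>&) { std::cout << "const\n"; }

int main() {
    std::shared_ptr<const Widget> cw = std::make_shared<Widget>();
    inspect(cw);   // should unambiguously pick the const overload
}
```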
My recommendation, based on this personal experience, is to avoid C++0x until the word "experimental" is removed from the description of GCC's support of C++0x or until you actually need any of the C++0x features to a degree that an alternative implementation would significantly sacrifice good design.
While reading through a variety of articles about LLVM and its own documentation, I've seen references to a property of its IR's backwards compatibility that I find strange.
Much of the documentation concerning the IR mentions that it is unstable and can break at pretty much any time. However, it is also often mentioned that the bitcode IR is more backwards compatible (as in 'often valid across more versions') than the text IR for a given LLVM version.
My understanding is that the text IR -> bitcode transformation is pretty much a direct mapping. Knowing this, why/how is it that the text IR is less compatible? I can't seem to find documentation on the actual mechanism that drives this behavior.
One example of such a statement about IR compatibility can be found here: http://llvm.org/docs/DeveloperPolicy.html#ir-backwards-compatibility
Speaking as a developer, though not one who developed LLVM:
As you would imagine, the bitcode is more like a virtual machine with a specific instruction set, up to 64 bits, which will rarely change until quantum-computer instructions become popular. As new 64-bit chips are delivered with incremental enhancements, it is not major surgery to 'append' new bitcode handling.
The IR, on the other hand, is a textual representation that goes through one or more passes to get to either bitcode or machine code. Supporting new capabilities obviously means a net 'add' (new instructions) as well as modifications to existing ones. This impacts not only the IR parsers/emitters but potentially many intermediate/ephemeral data models used between them. Clearly it would also impact the C/C++ LLVM API. Maintaining backwards compatibility there is a very costly proposition indeed.
I think the LLVM developers chose to promise more compatibility for bitcode because there are more obvious use cases that benefit from having backwards compatible bitcode than there are for the textual representation.
You may, for example, store and distribute libraries as LLVM bitcode and load them into some kind of just-in-time compiling interpreter for a user language. It would be inconvenient to have to upgrade the bitcode whenever the interpreter is upgraded. Bitcode is a machine interface, and a machine interface is more useful with well-defined compatibility.
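For concreteness, here is a rough sketch of that use case against the LLVM C++ API. Treat it as an assumption-laden outline rather than version-accurate code: the exact headers and signatures have shifted between LLVM releases, and "library.bc" and "plugin_entry" are hypothetical names.

```cpp
#include <memory>

#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"

int main() {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();

    llvm::LLVMContext context;
    llvm::SMDiagnostic err;

    // Load a previously distributed module ("library.bc" is a placeholder).
    std::unique_ptr<llvm::Module> mod = llvm::parseIRFile("library.bc", err, context);
    if (!mod)
        return 1;

    // JIT-compile the module and look up a hypothetical entry point by name.
    llvm::ExecutionEngine *engine = llvm::EngineBuilder(std::move(mod)).create();
    if (!engine)
        return 1;
    auto entry = reinterpret_cast<int (*)()>(engine->getFunctionAddress("plugin_entry"));
    return entry ? entry() : 1;
}
```

A distributor of such bitcode libraries benefits directly from the stronger compatibility promise: old .bc files keep loading into newer interpreters.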
Textual LLVM IR is more often used for debugging, documentation, or internal testing of LLVM, where fixing up any failing code whenever (infrequent) changes to the representation are made is more practical. It seems less likely that textual IR is stored long term or distributed widely.
Since maintaining backwards compatibility can negatively impact both implementation effort (see, e.g., a C++ parser) and the future evolution of a language (think of the struggles of modern programming languages to deal with retrospectively unwise decisions of the past), the LLVM community chose to do so only where there are clear benefits. I do not think there is a technical reason inherent in the formats, since both formats have to be parsed and handled correctly.
I want to test GCC/Clang, and I want to focus on the parts where most of the computation/optimization happens. Which files are those?
You probably won't find any blatant hot spot in the GCC compiler (there were some GSoC projects around that idea a few years ago), at least when you ask it to optimize.
You could pass the -ftime-report and -fmem-report options to GCC (in addition to optimization options like -O2) to find out which (compiler optimization) passes use the most time. For most workloads, you won't find any pass blatantly eating far more resources than the others.
I guess it is the same in Clang. Compilers are very complex software, and there is no easy hot spot to optimize inside them (otherwise, people within the compiler community would have found it).
BTW, recent GCC releases have plugin hooks, which enable you to write your own GCC plugin (in C++), or your own GCC extension (in MELT), to investigate further.
I think I finally know what I want in a compiled programming language: a fast compiler. I get the feeling that this is a really superficial thing to care about, but after switching from Java to Scala for a while, I realized that being able to make a small change in the code and immediately run the program is actually quite important to me. Besides Java and Go, I don't know of any languages that really value compile speed.
Delphi/Object Pascal. Make a change, press F9 and it runs; you don't even notice the compile time. A full rebuild of a fairly substantial project that we run takes on the order of 10-20 seconds, even on a fairly wimpy machine.
There's an open source variant available at www.freepascal.org. I've not messed with it but it reportedly is just as fast - it's the design of the Pascal language that allows this.
Java isn't fast at compiling. The feature you are looking for is probably hot replacement/redeployment while coding. Eclipse recompiles just the files you changed.
You could try some interpreted languages. They usually don't require compiling at all.
I wouldn't choose a language based on compilation speed...
Java is not the fastest compiler out there.
Pascal (and its close relatives) is designed to be fast - it can be compiled in a single pass. Objective Caml is known for its compilation speed (and there is a REPL too).
On the other hand, what you really need is a REPL, not fast recompilation and re-linking of everything. So you may want to try a language that supports incremental compilation. Clojure fits well (and it is built on top of the same JVM you're used to). Common Lisp is another option.
I'd like to add that there are official compilers for languages and unofficial ones made by different people. Because of this, performance obviously varies per compiler.
If we're talking just about the official compilers, I'd say it's probably Fortran. It's very old, but it's still used in most science and engineering projects because it is one of the fastest languages. C and C++ probably come tied in second because they're also used in science and engineering.
Compiling a program to bytecode instead of native code enables a certain level of portability, as long as a fitting virtual machine exists.
But I'm kind of wondering: why delay the compilation? Why not simply compile the bytecode when installing an application?
And if that is done, why not adapt it for languages that compile directly to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer, and compile it on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT compilers (Java HotSpot, for example) use type-feedback-based inlining. They track which types are actually used in the program, and inline function calls based on the assumption that what they saw earlier is what they will see later. For this to work, they need to run the program through a number of iterations of its "hot loop" in order to learn which types are used.
This optimization is totally unavailable at install time.
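To make the idea concrete, here is a toy C++ sketch of the kind of one-entry "inline cache" a JIT maintains internally. This is only an illustration of the concept under that assumption, not how HotSpot is actually implemented, and the class names are invented.

```cpp
#include <iostream>
#include <typeinfo>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};
struct Circle : Shape {
    double r = 1.0;
    double area() const override { return 3.14159 * r * r; }
};
struct Square : Shape {
    double s = 2.0;
    double area() const override { return s * s; }
};

// Remember the last dynamic type seen at this call site; if it shows up
// again, a JIT could take a de-virtualized, inlined fast path instead of
// a full virtual dispatch.
double cached_area(const Shape &sh) {
    static const std::type_info *last_seen = nullptr;
    const std::type_info &now = typeid(sh);
    if (last_seen && *last_seen == now)
        std::cout << "fast path (prediction from earlier feedback holds)\n";
    else
        std::cout << "slow path (record new type feedback)\n";
    last_seen = &now;
    return sh.area();
}

int main() {
    Circle c;
    Square s;
    cached_area(c);  // slow path: first observation
    cached_area(c);  // fast path: same type as before
    cached_area(s);  // slow path: prediction invalidated
}
```

The feedback only exists after the program has actually run for a while, which is why an install-time compiler cannot use it.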
The bytecode has already been compiled, just as the C++ code has been compiled.
Also, the JIT runtimes, i.e. .NET and the Java runtimes, are massive and dynamic, and you can't foresee which parts of them a program will use, so you need the entire runtime.
Also one has to realize that a language targeted to a virtual machine has very different design goals than a language targeted to bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII (see the sketch below).
Java wouldn't work on bare metal for so many reasons; primitive types, for one.
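A tiny illustration of the RAII point, assuming a made-up File wrapper and placeholder file name: cleanup is tied deterministically to scope, which is hard to reconcile with a garbage-collected VM.

```cpp
#include <cstdio>

// RAII: the resource is released at a deterministic point (end of scope),
// not whenever a garbage collector eventually runs finalizers.
class File {
public:
    explicit File(const char *path) : f_(std::fopen(path, "r")) {}
    ~File() { if (f_) std::fclose(f_); }
    File(const File &) = delete;
    File &operator=(const File &) = delete;
    bool ok() const { return f_ != nullptr; }
private:
    std::FILE *f_;
};

int main() {
    File f("example.txt");   // "example.txt" is just a placeholder name
    if (f.ok()) {
        // ... use the file ...
    }
}   // the destructor closes the file right here, predictably
```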
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.
Note: I know very little about the GCC toolchain, so this question may not make much sense.
Since GCC includes an Ada front end, and it can emit ARM, and devKitPro is based on GCC, is it possible to use Ada instead of C/C++ for writing code on the DS?
Edit: It seems that the target that devKitARM uses is arm-eabi.
devkitPro is not a toolchain, compiler or indeed any software package. The toolchain used to target the DS is devkitARM, one of the toolchains provided by devkitPro.
It may be possible to build the Ada compiler, but I doubt very much that you'll ever manage to get anything useful running on the DS itself. devkitPro will certainly never provide an Ada compiler as part of the packages we produce.
Yes, it is possible; see my project https://github.com/Lucretia/tamp and build the cross compiler as per my script. You would then be able to target the NDS using Ada. I have built a basic RTS as well, which will provide you with local exception handling.
And @Martin Beckett, why do you think Ada is aimed squarely at DoD stuff? They dropped the mandate years ago, and Ada is easily usable for any project. You do realise that Ada is a general-purpose programming language, don't you?
(Disclaimer: I don't know Ada)
Possibly.
You might be able to build devkitPro to use Ada; however, the pre-provided binaries (at least for OS X) do not have Ada support compiled in.
However, you will probably find yourself writing tons of C "glue" code to interface with the various hardware registers and the like.
One thing to consider when porting a language to the Nintendo DS is its relatively small stack (16 KB). There are possible workarounds, such as swapping the SRAM stack contents into DRAM (4 MB) when the stack gets full, or just keeping the whole stack in DRAM (assumed to be awfully slow).
And I second Dre on the fact that you'll have to write your own glue between the Ada library functions you'd like to use and the existing libraries on the DS (which hopefully cover most of the hardware stuff).
On a practical level, it is not possible.
On a theoretical level, you could use a custom Ada parser (I found this one on the ANTLR site, but it is quite old) to translate Ada to C/C++, and then feed that to devkitPro.
However, the effort of building such a translator is probably going to equal (if not exceed) the effort of creating the game itself.