VS2005 binary performs better than VS2008 binary - performance

A VS solution consisting of C# and C++ projects built with VS2005 outperforms the same solution converted to VS2008 (release mode). I already double-checked the optimization settings for the known bug where the settings are not converted correctly.
While the difference in performance is not big, it is still noticeable. Any ideas what the reason for the difference in performance could be?
Thanks in advance for any replies!

Can’t answer completely without knowing what the code is and what switches are being sent to the compiler.
C# performance shouldn't have changed just by recompiling with a different version of VS. If you have both on the same machine, they'll use the same version of the .NET Framework to execute.
As far as C++ goes, the compiler changes between VS versions, so performance won't always be the same. It's very possible they made a change to the optimizer that happens to perform worse on your code but better for most others. They could also have adjusted the compiler's instruction scheduler to account for a more modern "average" CPU. VS2008 also brought in a lot of C++ conformance fixes; there might be one that reduced the room the compiler has to optimize.
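One practical way to narrow it down is to extract a hot function and compare the assembly each toolset generates. A minimal sketch, assuming you can isolate such a function (the file and function below are invented for illustration); /O2 and /FAs are the real cl.exe switches for optimized builds and annotated assembly listings:

    // hot_loop.cpp - hypothetical hot spot extracted for comparison.
    // Build it with each toolset and diff the resulting listings:
    //   VS2005 command prompt:  cl /O2 /FAs /c hot_loop.cpp   -> hot_loop.asm
    //   VS2008 command prompt:  cl /O2 /FAs /c hot_loop.cpp   -> hot_loop.asm
    float dot(const float* a, const float* b, int n)
    {
        float sum = 0.0f;
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];   // the inner-loop code generation is what to compare
        return sum;
    }

Diffing the two .asm files for the hot routine usually makes an optimizer or scheduling difference obvious.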

Related

Which is the better way of profiling with VTune: standalone or integrated with MSVC?

I am getting certain errors while running VTune standalone, but everything works fine if I run it from the MSVC IDE.
Will there be any reporting inaccuracy if I run VTune from inside MSVC?
Which VTune version do you use? In general, the MSVS-integrated version of VTune provides exactly the same functionality as the standalone one. And of course there is no difference in accuracy at all.
The choice between MSVS and standalone is a matter of your application and working style. If you use MSVS for development (and thus have the solution and sources integrated), the MSVS-integrated version should be more convenient because it automates more of the source-level viewpoints. At the same time, some people prefer to use the profiler standalone because their solutions already overload the IDE process; however, that is a rare case even for big legacy industrial codebases.
Side note: across the other Parallel Studio tools (VTune, Inspector and Advisor XE, as well as Composer = Compiler + Libraries) you may indeed find features enabled in different ways in MSVS versus the Linux/standalone versions. Examples: 1. Inspector debugger integration with cl vs. gcc/gdb; 2. the Advisor Annotation Wizard vs. the Assistance window. VTune, however, doesn't have even those small variations between hosts.

What are some compiled programming languages that compile fast?

I think I finally know what I want in a compiled programming language: a fast compiler. I get the feeling that this is a really superficial thing to care about, but after switching from Java to Scala for a while I realized that being able to make a small change in code and immediately run the program is actually quite important to me. Besides Java and Go, I don't know of any languages that really value compile speed.
Delphi/Object Pascal. Make a change, press F9 and it runs - you don't even notice the compile time. A full rebuild of a fairly substantial project of ours takes on the order of 10-20 seconds, even on a fairly wimpy machine.
There's an open source variant available at www.freepascal.org. I've not messed with it, but it is reportedly just as fast - it's the design of the Pascal language that allows this.
Java isn't fast at compiling. The feature you are looking for is probably hot code replacement/redeployment while coding. Eclipse recompiles just the files you changed.
You could try some interpreted languages. They usually don't require compiling at all.
I wouldn't choose a language based on compilation speed...
Java is not the fastest compiler out there.
Pascal (and its close relatives) is designed to be fast - it can be compiled in a single pass. Objective Caml is known for its compilation speed (and there is a REPL too).
On the other hand, what you really need is a REPL, not fast recompilation and re-linking of everything. So you may want to try a language which supports incremental compilation. Clojure fits well (and it is built on top of the same JVM you're used to). Common Lisp is another option.
I'd like to add that there are official compilers for languages and unofficial ones made by different people. Obviously, performance therefore varies from compiler to compiler.
If we are talking just about official compilers, I'd say it's probably Fortran. It's very old, but it is still used in most science and engineering projects because it is one of the fastest languages. C and C++ probably come in tied for second because they are also used in science and engineering.

How to restore the default settings for the Visual Studio 2010 Configuration Manager

After some "games" with the Visual Studio Configuration Manager I found that every new C#/VB.NET project I create in my Visual Studio only has the 'x86' solution platform. It should be 'Any CPU', and I should be able to choose x86 or x64 if required. How do I reset these settings for all new projects?
Microsoft changed the default for EXEs to x86. Check out this post by Rick Byers:
http://blogs.msdn.com/b/rmbyers/archive/2009/06/09/anycpu-exes-are-usually-more-trouble-then-they-re-worth.aspx
Here's the list I've been using in our discussions to justify making x86 the default for EXE projects in Visual Studio:
Running in two very different modes increases product complexity and the cost of testing
* Often people don't realize the implications on native interop of architecture-neutral assemblies. It means you need to ensure that equivalent 32-bit and 64-bit versions of the native DLLs you depend on are available, and (most significantly) that the appropriate one is selected automatically. This is fairly easy when calling OS APIs, due to the OS re-mapping of c:\windows\system32 to c:\windows\syswow64 when running in the WOW and the extensive testing the OS team does. But many people who ship native DLLs alongside their managed app get this wrong at first and are surprised when their application blows up on 64-bit systems with an exception about their 32-bit DLL being in a bad format. Also, although it's much rarer than for native code, pointer-size bugs can still manifest in .NET (e.g. assuming IntPtr is the same as Int32, or incorrect marshalling declarations when interoperating with native code).
* Also, in addition to the rules you need to know to follow, there's just the issue that you've now really got twice as much code to test. E.g., there could easily be (and certainly have been many) CLR bugs that reproduce only on one architecture of the CLR, and this applies all the way across the stack (from OS, framework, 3rd-party libraries, to your code). Of course in an ideal world everyone has done a great job testing both 32-bit and 64-bit and you won't see any differences, but in practice for any large application that tends not to be the case, and (at Microsoft at least) we end up duplicating our entire test system for 32 and 64-bit and paying a significant ongoing cost to testing and supporting all platforms.
* [Edit: Rico - of CLR and VS performance architect fame - just posted a great blog entry on why Visual Studio will not be a pure 64-bit application anytime soon]
32-bit tends to be faster anyway
* When an application can run fine in either 32-bit or 64-bit mode, the 32-bit mode tends to be a little faster. Larger pointers mean more memory and cache consumption, and the number of bytes of CPU cache available is the same for both 32-bit and 64-bit processes. Of course the WOW layer does add some overhead, but the performance numbers I've seen indicate that in most real-world scenarios running in the WOW is faster than running as a native 64-bit process.
Some features aren't available in 64-bit
* Although we all want to have perfect parity between 32-bit and 64-bit, the reality is that we're not quite there yet. CLR v2 only supported mixed-mode debugging on x86, and although we've finally added x64 support in CLR v4, edit-and-continue still doesn't support x64. On the CLR team, we consider x64 to be a first-class citizen whenever we add new functionality, but the reality is that we've got a complicated code-base (e.g. completely separate 32-bit and 64-bit JIT compilers) and we sometimes have to make trade-offs (for example, adding 64-bit EnC would have been a very significant cost to the JIT team, and we decided that their time was better spent on higher priority features). There are other cool features outside of the CLR that are also specific to x86 - like historical debugging in VS 2010. Complicating matters here is that we haven't always done a great job with the error messages, and so sometimes people "upgrade" to a 64-bit OS and are then disgusted to see that some of their features no longer appear to work (without realizing that if they just re-targeted the WOW they'd work fine). For example, the EnC error in VS isn't very clear ("Changes to 64-bit applications are not allowed"), and has led to some confusion in practice. I believe we're doing the right thing in VS2010 and fixing that dialog to make it clear that switching your project to x86 can resolve the issue, but still there's no getting back the time people have wasted on errors like this.
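To make the pointer-size point from the quote concrete, here is a minimal native C++ sketch (not taken from the post) of the kind of bug that only surfaces in a 64-bit build: on 64-bit Windows the LLP64 model keeps long at 32 bits, so stuffing a pointer into it silently truncates the address.

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        int value = 42;
        int* p = &value;

        // Bug: on 64-bit Windows (LLP64) 'unsigned long' is still 32 bits wide,
        // so the upper half of the pointer value is silently thrown away.
        unsigned long truncated = (unsigned long)(uintptr_t)p;

        // Correct: uintptr_t is guaranteed to be wide enough to hold a pointer.
        uintptr_t full = (uintptr_t)p;

        printf("sizeof(void*) = %u, sizeof(unsigned long) = %u\n",
               (unsigned)sizeof(void*), (unsigned)sizeof(unsigned long));
        // On a 32-bit build the round trip always matches; on a typical
        // 64-bit build it does not.
        printf("truncated copy still equals the original pointer: %s\n",
               ((uintptr_t)truncated == full) ? "yes" : "no");
        return 0;
    }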
Are you sure this is the problem? Maybe you should just set "Show Advanced Build Configurations" as detailed here.

Binary Decision Diagram library for Windows

After trying to get jinc compiled under Windows and quickly running into hundreds of compiler errors, I'm looking for a quality BDD library that will build for Windows. Preferably in C or C++, but as long as I can bind to it I'm happy.
I recently wrestled with installing CUDD v2.4.2 in a Windows / Visual Studio environment.
There is documentation out there, but in my opinion none of it gives the complete picture of how to install the thing and get it working in non-Unix environments - for example, how to address the issues with the Makefile, how to link to the *.a C archive files in your project, minor issues with the cpu_stats.c file, etc. This is a shame because CUDD seems to be quite a powerful means of reducing complexity for many problems, such as integer programming.
I recently managed to get it going in VS 2010. My blog has the details.
CUDD is good: http://vlsi.colorado.edu/~fabio/CUDD/ I have compiled it in Visual Studio 2005.
There also seem to be precompiled binaries: http://web.cecs.pdx.edu/~alanmi/research/soft/softPorts.htm
As an ex-researcher, I can tell you that two years ago CUDD was best in class regarding efficiency.
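For reference, once the library builds, basic CUDD usage looks roughly like the following minimal sketch, assuming cudd.h and the CUDD libraries are already on your include and link paths (the variables and formula are made up for illustration):

    #include <cstdio>
    #include "cudd.h"

    int main()
    {
        // Create a BDD manager with the default table and cache sizes.
        DdManager* mgr = Cudd_Init(0, 0, CUDD_UNIQUE_SLOTS, CUDD_CACHE_SLOTS, 0);

        // Two Boolean variables, x0 and x1.
        DdNode* x0 = Cudd_bddIthVar(mgr, 0);
        DdNode* x1 = Cudd_bddIthVar(mgr, 1);

        // Build f = x0 AND (NOT x1); newly created result nodes must be referenced.
        DdNode* f = Cudd_bddAnd(mgr, x0, Cudd_Not(x1));
        Cudd_Ref(f);

        printf("BDD for f has %d nodes\n", Cudd_DagSize(f));

        // Release the function and shut the manager down.
        Cudd_RecursiveDeref(mgr, f);
        Cudd_Quit(mgr);
        return 0;
    }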
Biddy is becoming better and better: http://biddy.meolic.com/
OK, this is a subjective claim, because I am the main author of Biddy. However, while Biddy does not have as many functions, does not have as robust and refined memory management, and has not been tested in as many projects as CUDD, it is a viable library. By using it, you can help to improve it. My group is active and flexible, and we can implement any function you need.

What compilers besides gcc can vectorize code?

GCC can vectorize loops automatically when certain options are specified and given the right conditions. Are there other compilers widely available that can do the same?
ICC
LLVM can also do it, and so can Vector Pascal, plus one that is not free: VectorC. These are just some I remember.
Also PGI's compilers.
The Mono project, the open source implementation of the .NET runtime, has added objects that use SIMD instructions. While not a compiler, the Mono CLR is the first managed-code system to generate vector operations natively.
IBM's xlc can auto-vectorize C and C++ to some extent as well.
Actually, in many cases GCC used to be considerably worse than ICC at automatic code vectorization. I don't know whether it has recently improved enough, but I doubt it.
VectorC can do this too. You can also specify the target CPU so that it takes advantage of different instruction sets (e.g. MMX, SSE, SSE2, ...).
Visual C++ (I'm using VS2005) can be forced to use SSE instructions. It seems not to be as good as Intel's compiler, but if someone already uses VC++, there's no reason not to turn this option on.
Go to the project's properties, Configuration Properties, C/C++, Code Generation, Enable Enhanced Instruction Set, and set "Streaming SIMD Extensions" or "Streaming SIMD Extensions 2". You will also have to set the floating-point model to fast. Some other options will have to be changed too, but the compiler will tell you about that.
Even though this is an old thread, I thought I'd add to the list - Visual Studio 11 will also have auto-vectorization.
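For reference, the kind of loop the auto-vectorizers mentioned in this thread look for is an independent, element-wise pass over contiguous arrays. A minimal sketch (function name invented); with GCC it vectorizes under -O3 (or -O2 -ftree-vectorize), and ICC handles the same shape at its default optimization levels:

    #include <cstddef>

    // Independent iterations over contiguous arrays: the classic
    // auto-vectorization candidate. __restrict tells the compiler the
    // arrays don't overlap, which removes one obstacle to vectorizing.
    void add_arrays(float* __restrict dst,
                    const float* __restrict a,
                    const float* __restrict b,
                    std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = a[i] + b[i];
    }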
