Is anyone aware of any performance implications from changing _WIN32_WINNT from 0x0400 to 0x0501?
I am compiling C++ on VS2005.
My application is very communications-oriented, doing quite a lot of Winsock work.
A value of 0x0400 targets Windows NT 4.0 (_WIN32_WINNT_NT4), which exposes a smaller subset of the Windows SDK. That means you are excluding, ignoring and throwing away a lot of declarations and inline helpers that would otherwise be pulled into your build. So yes, it can end up leaner and a bit faster.
So when you define 0x0501 you are saying: yes, give me all the extra goodness that the Windows XP headers expose. However, your application most likely will not run on Windows 2000 due to failed imports. And since you are bringing in all that extra bulk, your compile times will be longer, your executable will be bigger, and it may well end up slower.
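To make the trade-off concrete, here is a minimal sketch (mine, not from the question) of what the macro actually controls: which declarations the SDK headers expose, and therefore which imports end up in your executable. GetNativeSystemInfo is just one XP-era API used for illustration; the exact header guards vary between SDK releases.

    // Minimal sketch: _WIN32_WINNT selects which declarations the SDK headers expose.
    #define _WIN32_WINNT 0x0501   // 0x0400 = NT4, 0x0500 = 2000, 0x0501 = XP
    #include <windows.h>

    int main()
    {
        SYSTEM_INFO si = {};
    #if (_WIN32_WINNT >= 0x0501)
        // Only declared for XP-era targets; the resulting import is what
        // makes the binary fail to load on Windows 2000.
        GetNativeSystemInfo(&si);
    #else
        // The only option available when targeting NT4/2000.
        GetSystemInfo(&si);
    #endif
        return si.dwNumberOfProcessors > 0 ? 0 : 1;
    }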
You can find more information here on these topics:
http://blogs.msdn.com/b/oldnewthing/archive/2007/04/11/2079137.aspx
http://msdn.microsoft.com/en-us/library/aa383745.aspx
I'm unaware of any particular performance-related issues, but if you could provide more specific details about how your performance is suffering, someone might be able to help.
For example, is your network throughput lower than you think it used to be?
In a tutorial I've encountered a concept that is new to me and that I never thought was possible. Actually, I thought that compilation was an entirely pre-run-time process. This is the phrase from the tutorial: "Compile time occurs before link time (when the output of one or more compiled files are joined together) and runtime (when a program is executed). In some programming languages it may be necessary for some compilation and linking to occur at runtime".
My questions are:
Are the pre-run-time compilation and linking processes absolutely different from run-time compilation and linking? If yes, please explain the main differences.
How are code sections that need to be compiled (linked) during run-time marked, and where is that information kept? (This may differ from language to language; if possible, please give a specific example.)
Thank you very much for your time!
Runtime compilation
The best (most well-known) example I'm personally aware of is the just-in-time compilation used by Java. As you might know, Java code is compiled into bytecode, which can be interpreted by the Java Virtual Machine. It's therefore different from, let's say, C++, which is first fully (preprocessed and) compiled and linked into an executable that can be run directly by the OS without any virtual machine.
The Java bytecode is instead interpreted by the VM, which maps it to processor-specific instructions. That said, the JVM also does JIT compilation, which takes that bytecode and compiles it (during runtime) into machine code. Here we arrive at your second question. Even in Java it depends on which JVM you are using, but basically there are pieces of code called hotspots: the pieces that are run frequently and that may be compiled so the application's performance improves. This is done at runtime because an ahead-of-time compiler does not have (or may not have) all the data necessary to judge which pieces of code are in fact run frequently. JIT therefore requires some kind of runtime statistics gathering, which is done in parallel with program execution by the JVM.

What statistics are gathered and what can be optimised (compiled at runtime) depends on the implementation; you obviously cannot do everything a normal compiler would do, due to memory and time constraints, so usually only a limited set of optimisations is supported in runtime compilation. I guess this partly answers the first question: you don't compile everything. You can try looking for such info, but in my experience it is usually badly documented and hard to find, at least when it comes to official sources rather than presentations or blogs.
Runtime linking
The linker is a different pair of shoes. We cannot use the Java example any more, since Java doesn't really have a linker like C or C++ do (instead it has a classloader, which takes care of loading class files and putting everything together).
Usually linking is performed by the linker right after the compilation step (static linking). This has pros (no external dependencies) and cons (a higher memory footprint, since the code cannot be shared via a shared library, and when the library version changes you need to recompile and relink your sources).
Runtime linking (dynamic/late linking) is instead performed by the OS: it's the OS loader's job to first load the shared libraries and then attach them to a running process. There are also different kinds of dynamic linking: explicit and implicit. The benefits are that you don't have to recompile your source when the library version changes, and that the library can be shared; the drawback is, for example, what happens when different programs use the same library but require different versions (look up "DLL hell"). So yes, those two concepts are also quite different.
Again, how it's all done, and how it is decided what should be linked and how, is OS-specific; for instance, Microsoft has the dynamic-link library (DLL) concept.
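To make the explicit flavour concrete, here is a hedged sketch of run-time linking on Windows with LoadLibrary/GetProcAddress (the DLL name and function name are made up for the example; implicit linking is what you get when you link against an import library and let the loader resolve everything before main runs):

    // Sketch of explicit (run-time) dynamic linking on Windows.
    // "mylib.dll" and "add_numbers" are hypothetical names used for illustration.
    #include <windows.h>
    #include <iostream>

    typedef int (__cdecl *AddFn)(int, int);

    int main()
    {
        // Load the shared library at run time instead of at load time.
        HMODULE lib = LoadLibraryA("mylib.dll");
        if (!lib) {
            std::cerr << "mylib.dll not found\n";
            return 1;
        }

        // Resolve the symbol ourselves; with implicit linking the OS loader
        // would have done this before main() ran.
        AddFn add = reinterpret_cast<AddFn>(GetProcAddress(lib, "add_numbers"));
        if (add)
            std::cout << add(2, 3) << '\n';

        FreeLibrary(lib);
        return 0;
    }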
I want to install a driver for ROS (Robot Operating System), and I have two options: a binary install, or compiling and installing from source. I would like to know which installation is better, and what the advantages and disadvantages of each one are.
Source: a.k.a. source code, usually in some sort of tarball or zip file. This is raw programming-language code. You need some sort of compiler (javac for Java, gcc for C++, etc.) to create the executable that your computer then runs.
Advantages:
You can see what the source code is which means....
You can edit the end result program to behave differently
Depending on what you're doing, when you compile you can enable certain optimizations that will work on your machine and ONLY your machine (or one EXACTLY like it). For instance, for some sort of graphics-rendering software, you could compile in GPU support, which would increase the rendering speed (see the sketch after this list).
You can create a version of an application for a different OS/Chipset (see Binary below)
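As a hedged illustration of the machine-specific-optimization point above (the function name and flags are mine, not from the answer): building from source with something like -march=native lets the compiler define feature macros such as __AVX2__, so your build can contain code paths that a generic pre-built binary cannot assume are safe.

    #include <cstddef>

    #if defined(__AVX2__)        // defined only when the compiler targets an AVX2-capable CPU
    #include <immintrin.h>

    // Vectorised path: only exists in a binary built for this machine.
    void add_arrays(float* dst, const float* a, const float* b, std::size_t n)
    {
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8)
            _mm256_storeu_ps(dst + i,
                             _mm256_add_ps(_mm256_loadu_ps(a + i),
                                           _mm256_loadu_ps(b + i)));
        for (; i < n; ++i)
            dst[i] = a[i] + b[i];
    }
    #else
    // Portable fallback: the kind of code a generic pre-built binary ships.
    void add_arrays(float* dst, const float* a, const float* b, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = a[i] + b[i];
    }
    #endif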
Disadvantages:
You have to have your compiler installed
You need to manually install all required libraries, which frequently also need to be compiled (and THEIR libraries need to be installed, etc.) This can easily turn a quick 30-second command into a multi-hour project.
There are any number of things that could go wrong, and if you're not familiar with what the various errors mean, finding support online could be quite difficult.
Binary: This is the actual program that runs. This is the executable that gets created when you compile from source. They typically have all necessary libraries built into them, or install/deploy them as necessary (depending on how the application was written).
Advantages:
It's ready-to-run. If you have a binary designed for your processor and operating system, then chances are you can run the program and everything will work the first time.
Less configuration. You don't have to set up a whole bunch of configuration options to use the program; it just uses a generic default configuration.
If something goes wrong, it should be a little easier to find help online, since the binary is pre-compiled....other people may be using it, which means you are using the EXACT same program as them, not one optimized for your system.
Disadvantages:
You can't see/edit the source code, so you can't add optimizations or tweak it for your specific application. Additionally, you don't really know what the program is going to do, so there could be nasty surprises waiting for you (this is why antivirus software is useful, although LESS necessary on a Linux system).
Your system must be compatible with the Binary. For instance, you can't run a 64-bit application on a 32-bit operating system. You can't run an Intel binary for OS X on an older PowerPC-based G5 Mac.
In summary, which one is "better" is up to you. Only you can decide which one will be necessary for whatever it is you're trying to do. In most cases, using the binary is going to be just fine, and give you the least trouble. Sometimes, though, it is nice to have the source available, if only as documentation.
Compiling a program to bytecode instead of native code enables a certain level of portability, as long as a fitting virtual machine exists.
But I'm kinda wondering, why delay the compilation? Why not simply compile the byte code when installing an application?
And if that is done, why not adopt it for languages that currently compile directly to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer, and compile on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT Compilers (Java HotSpot, for example) use type feedback based inlining. They track which types are actually used in the program, and inline function calls based on the assumption that what they saw earlier is what they will see later. In order for this to work, they need to run the program through a number of iterations of its "hot loop" in order to know what types are used.
This optimization is totally unavailable at install time.
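A rough hand-written analogue in C++ may help picture what the JIT is doing (this is only an illustration of the idea, not how HotSpot is implemented; the class names are invented):

    #include <vector>
    #include <memory>

    struct Shape  { virtual ~Shape() = default; virtual double area() const = 0; };
    struct Circle : Shape { double r = 1.0; double area() const override { return 3.14159 * r * r; } };
    struct Square : Shape { double s = 1.0; double area() const override { return s * s; } };

    double total_area(const std::vector<std::unique_ptr<Shape>>& shapes)
    {
        double total = 0.0;
        for (const auto& s : shapes) {
            // A JIT that has observed mostly Circle at this call site can guess
            // the type, inline Circle::area directly, and keep a guard that falls
            // back to the normal virtual call when the guess is wrong -- roughly
            // the hand-written version below.
            if (const Circle* c = dynamic_cast<const Circle*>(s.get()))
                total += 3.14159 * c->r * c->r;   // speculative "inlined" fast path
            else
                total += s->area();               // generic slow path (guard failed)
        }
        return total;
    }

An ahead-of-time or install-time compiler has no call-site history to base such a guess on, which is the point being made above.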
The bytecode has already been compiled, just as the C++ code has been compiled.
Also, the JIT compiler and runtime (the .NET and Java runtimes, for example) are massive and dynamic; you can't foresee which parts of them a program will use, so you need to ship the entire runtime anyway.
Also one has to realize that a language targeted to a virtual machine has very different design goals than a language targeted to bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
Java wouldn't work on bare metal, for many reasons; primitive types, for one.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.
A VS solution consisting of C# and C++ projects built with VS2005 outperforms the same solution converted to VS2008 (release mode). I have already double-checked the optimization settings for the known bug where the settings are not converted correctly.
While the difference in performance is not big it is still notable. Any ideas what the reason could be for the difference in performance?
Thanks in advance for any replies!
Can’t answer completely without knowing what the code is and what switches are being sent to the compiler.
C# performance shouldn't have changed just by recompiling with a different version of VS. If you have both on the same machine, they'll use the same version of the .NET Framework to execute.
As far as C++ goes, the compiler changes between VS versions so perf won’t always be the same. It’s very possible they made a change to the optimizer that happens to perform worse on your code but better for most others. They could also have adjusted their compiler's instruction scheduler to account for a more modern "average" CPU. VS2008 also brought in a lot of C++ compliance fixes – there might be one that reduced the room the compiler has to optimize.
I have an application which has many services and one UI module, all developed in VC++ 6.0; in total about 560 KLOC.
It uses multithreading, MFC and all the usual data types such as WORD, int and long.
Now we need to support a 64-bit OS. What changes would we need to make to the product?
By support I mean both running the application on a 64-bit OS and also making use of the 64-bit address space.
Edit: I am ruling out migration to VS2005 or anything higher than VC6.0 due to time constraints.
So what changes need to be done?
64-bit Windows includes 32-bit support via WOW64. Any 32-bit application should just continue to work.
(It is only drivers that have to match the bitness of the OS.)
[Note to commenters: plugins, of whatever type, are not separate applications but DLLs used by other applications, and those do need to match the host. In that case you also get the same problem where 64-bit extensions are incompatible with 32-bit hosts.]
As Richard says, the 32-bit version should continue to work unless you've got a driver or a shell extension or something.
However, if you do need to upgrade the code, you're going to have to upgrade the compiler too: I don't think MFC got good 64-bit support until VS2005 or later. I'd suggest you get the 32-bit code building in VS2010 (this will not be trivial) and then start thinking about converting it to 64-bit. You can of course keep the production 32-bit builds on VC6, but then you add a maintenance burden.
You'll probably get most of the way there by flipping the compiler to 64-bit and turning on full warnings; particularly given the size of your code, it may be impractical to review it all. One thing to watch out for is storing pointers in ints, DWORDs, etc., which may now be too short to hold the pointer; you need DWORD_PTR etc. now, but I think the warnings do catch that.
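A hedged sketch of the pointer-in-an-int problem mentioned above (the function is invented for illustration); DWORD_PTR and friends come from <basetsd.h>, which <windows.h> pulls in:

    #include <windows.h>

    void remember(void* p)
    {
        // Wrong on x64: pointers are 64 bits there, DWORD is always 32 bits.
        // DWORD stored = (DWORD)p;          // truncation, flagged as warning C4311

        DWORD_PTR stored = (DWORD_PTR)p;     // an integer wide enough on both 32- and 64-bit builds
        void* back = (void*)stored;          // round-trips without loss
        (void)back;
    }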
Or, if the product is split into many components, you might get away with migrating only a few of them to 64-bit. Then, unfortunately, you've got data-size issues in the communication between the 32-bit and 64-bit parts.
You must convert to a newer compiler; the time constraints are pretty much irrelevant. The VC6 compiler simply cannot generate 64-bit code. Every pointer it generates is 32 bits, for starters. If you need to access "64-bit memory", i.e. memory above 0x00000000FFFFFFFF, then 32 bits is simply not enough.
If you're ruling out changing your IDE to one that intrinsically supports 64-bit compiling and debugging, you're making your job unnecessarily more complex. Are you sure it's not worth the hit?
Just for running on a 64bit OS, you won't need to make any changes. That's what WOW64 is for.
If, however, you want to run natively as a 64-bit process (i.e., use the 64-bit address space), you will have to compile as 64-bit. That means using an IDE/toolchain that supports 64-bit targets. There is no way around this.
Most programs should have no problem converting to 64-bit if they are written to decent coding standards (mainly, no assumptions about the size of a pointer, such as int-to-pointer conversions). You'll get a lot of warnings about things like std::size_t conversions, but they will be fairly benign.
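For what those size_t warnings typically look like, here is a small illustrative sketch (the function is invented): on 64-bit Windows std::size_t is 64 bits while int stays 32, so the compiler flags the narrowing with warning C4267.

    #include <cstddef>
    #include <string>

    int length_of(const std::string& s)
    {
        int n = s.size();                         // warning C4267: possible loss of data

        // Either keep the wider type or make the narrowing explicit:
        std::size_t wide = s.size();              // no warning
        int narrow = static_cast<int>(s.size());  // intent is visible, warning silenced
        (void)wide; (void)narrow;
        return n;
    }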