How to Benchmark VS2010 Installations - visual-studio-2010

I'm doing a VS2010 installation on a VM and one on a physical PC.
VM Spec:
Xeon CPU, 3.33 GHz (dual core)
Windows 7 64 Bit
4 GB of Ram
Physical PC:
Dual-core CPU (speed unknown at this time)
Windows 7 64 Bit
6 GB of Ram
My question is: what is the best way to run some sort of benchmark test with VS2010 to determine which setup has the best performance?
Thanks.

Real-life benchmarking is easy to do: take a project you are working on (or any project similar to it in structure), and measure the time of a full rebuild.
If the project does not take long enough for the results to be representative or interesting, then I'd ask why you care about the performance at all.
As projects by different teams differ a lot (some use more templates, some more complex and difficult-to-optimize expressions, some lots of small files, some lots of libraries ..., some C++, some C#), I doubt a "universal benchmark project" could exist that would be useful enough to you. Taking the real project your developers are working on is the most representative thing you can do.
If you just want a rough "order of magnitude" comparison, you can simply download a large enough open-source project in the same language you use. E.g. for C you might want to try something like the OGG library source or the LibPNG source.
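If you want to put an actual number on it, you can time the rebuild with a small wrapper. A minimal sketch (assuming you run it from a Visual Studio command prompt so msbuild is on the PATH, and that MySolution.sln stands in for your real solution; any C++11 compiler will build it):

    // rebuild_timer.cpp - times one full rebuild of a solution (hypothetical paths).
    #include <chrono>
    #include <cstdlib>
    #include <iostream>

    int main()
    {
        using clock = std::chrono::steady_clock;

        // /t:Rebuild forces a full rebuild, /m uses all cores, /v:quiet keeps output short.
        const char* cmd = "msbuild MySolution.sln /t:Rebuild /m /v:quiet";

        const auto start = clock::now();
        const int rc = std::system(cmd);
        const auto stop = clock::now();

        std::cout << "rebuild exit code " << rc << ", took "
                  << std::chrono::duration_cast<std::chrono::seconds>(stop - start).count()
                  << " s\n";
        return rc;
    }

Run it a few times on each machine and compare the second and later runs, so the OS file cache is warm on both.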

Related

What is the quantifiable benefit of 64 bit for Visual Studio Code over 32 bit

I'm not a hardware guy, but I know that a request for a 64-bit version of Visual Studio was declined by Microsoft, which stated that a 64-bit version would not have good performance.
One noticeable difference between the two, which I feel is obvious, is the code base. Visual Studio began its life in 1997; one would think that means more baggage and fewer opportunities for a very modern application architecture and code, and perhaps parts of it were built to perform on 32 bit and for some reason are not suitable for 64 bit? I don't know.
Visual Studio Code, on the other hand, is a modern Electron app, which means it is pretty much just HTML, CSS and JavaScript. I'm betting a 64-bit build of Visual Studio Code faces few obstructions, and although the performance gain may not be truly noticeable, why not?
P.S.
I would still like to understand what areas may be improved in performance and whether that improvement is negligible to a developer. Any additional info or fun facts you may know would be great; I would like to have as much info as possible, and I will update the question with any hard facts I uncover that are not mentioned.
The existence of 64-bit Visual Studio Code is largely a side-effect of the fact that the Node.js- and Chromium-based runtimes of Electron support both 32- and 64-bit architectures, not a primary design goal for the application. Microsoft developed VS Code with Electron, a framework used to build desktop applications with web technologies.
Because Electron already includes runtimes for both architectures (and for different operating systems), VS Code can provide both versions with little additional effort—Electron abstracts the differences between machines from the JavaScript code.
By contrast, Microsoft distributes much of Visual Studio as compiled binaries that contain machine-specific instructions, and the cost of rewriting and maintaining the source code for 64-bits historically outweighed any benefits. In general, a 64-bit program isn't noticeably faster to the end user than its 32-bit counterpart if it never exceeds the limitations of a 32-bit system. Visual Studio's IDE shell doesn't do much heavy-lifting—the bulk of the expensive processing in a typical workflow is performed by the integrated toolchains (compilers, etc.) which usually support 64-bit systems.
With this in mind, any benefits we may notice from running a 64-bit version of VS Code are similar to those we would see from using a 64-bit web browser. Most significantly, a 64-bit version can address more than 4 GB of memory, which may matter if we need to open a lot of files simultaneously or very large files, or if we use many heavy extensions. So—most important to us developers—the editor won't run out of memory when abused.
While this sounds like an insurance policy worth signing, even if we never hit those memory limits, remember that 64-bit applications generally consume more memory than their 32-bit counterparts. We may want to choose the 32-bit version if we desire a smaller memory footprint. Most developers may never hit that 4 GB wall.
In rare cases, we may need to choose either a 32-bit or 64-bit version if we use an extension that wraps native code like a DLL built for a specific architecture.
Any other consequences, positive or negative, that we experience from using a 64-bit version of VSCode depend on the versions of Electron's underlying runtime components and the operating system they run on. These characteristics change continuously as development progresses. For this reason, it's difficult to state in a general manner that the 32-bit or 64-bit versions outperform the other.
For example, the V8 JavaScript engine historically disabled some optimizations on 64-bit systems that are enabled today. Certain optimizations are only available when the operating system provides facilities for them.
Future 64-bit versions on Windows may take advantage of address space layout randomization for improved security (more bits in the address space increases entropy).
For most users, these nuances really don't matter. Choose a version that matches the architecture of your system, and switch only if you encounter problems. Updates to the editor will continue to bring optimizations for its underlying components. If resource usage is a big concern, you may not want to use a GUI editor in the first place.
I haven't worked much on Windows, but I have worked with x86, x64 and ARM (both 32-bit and 64-bit instruction sets) processors. Based on my experience, before writing code in 64-bit format we asked: do we really need 64-bit instructions? If our operation can be performed within 32 bits, why would we need another 32 bits?
Think of it like this: you have a processor with a 64-bit address bus, a 64-bit data bus and 64-bit registers. Almost all of the instructions of your program require at most 32 bits. What will you do? Well, I think there are two ways:
Create a 64-bit version of your program and run all the 32-bit instructions on your 64-bit processor (wasting 32 bits of your processor in each instruction cycle, and filling the program counter with an address that is 4 bytes ahead). Your application, which could have executed in 256 MB of RAM, now requires 512 MB, so other programs and processes in RAM will suffer.
Keep the program in 32-bit format and combine two 32-bit instructions to be pushed into your 64-bit processor for execution.
Obviously, the second approach will run faster with the same resources.
But yes, if your program contains more operations that really are 64-bit in nature, e.g. processing 4K video (better on a 64-bit processor with a 64-bit instruction set) or performing floating-point operations with up to 15 decimal digits of precision, then it is better to create a 64-bit program file.
Long story short: try to write compact software and leverage the hardware as much as possible.
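To make the memory point concrete: most of the growth you see in a 64-bit build comes from pointers (and the padding that follows them) doubling in size, rather than from the instructions themselves. A tiny illustration with a made-up struct:

    // Compile once as 32-bit and once as 64-bit and compare the output.
    #include <cstdint>
    #include <iostream>

    struct Node {                 // a typical pointer-heavy node (list/tree/DOM-like)
        Node*        next;        // 4 bytes in a 32-bit build, 8 bytes in a 64-bit build
        Node*        child;       // same again
        std::int32_t payload;     // 4 bytes either way
    };

    int main()
    {
        std::cout << "sizeof(void*) = " << sizeof(void*) << "\n"   // 4 vs 8
                  << "sizeof(Node)  = " << sizeof(Node)  << "\n";  // typically 12 vs 24
    }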
From what I have read Here, Here and Here, I gather that most of the components of VS require only 32-bit instructions.
Hope it explains.
Thanks
4 years later, in 2021, you now have:
"Microsoft's Visual Studio 2022 is moving to 64-bit" from Mary Jo Foley
It references the official publication "Visual Studio 2022" from Amanda Silver, CVP of Product, Developer Division
Visual Studio 2022 is 64-bit
Visual Studio 2022 will be a 64-bit application, no longer limited to ~4gb of memory in the main devenv.exe process. With a 64-bit Visual Studio on Windows, you can open, edit, run, and debug even the biggest and most complex solutions without running out of memory.
While Visual Studio is going 64-bit, this doesn’t change the types or bitness of the applications you build with Visual Studio. Visual Studio will continue to be a great tool for building 32-bit apps.
I find it really satisfying to watch this video of Visual Studio scaling up to use the additional memory that’s available to a 64-bit process as it opens a solution with 1,600 projects and ~300k files.
Here’s to no more out-of-memory exceptions. 🎉

Most simple architecture available as GCC target

I'm looking for a CPU architecture which is supported by GCC (and is still maintained) and for which it is easiest to implement a software simulator.
It should be something simple, with a flat memory model, a 16-bit+ address space and a 16-32 bit ALU; good code density is preferred, as it will be running programs under program-memory limitations.
Just a few words about the origin of these requirements: I need a virtual CPU for running 'sandboxed' programs, and it will run on microcontrollers with ~5 KB of RAM and an ARM CPU at a ~20 MHz clock speed.
Performance is not an issue at all; what I really need is to write C/C++ programs and then run them in the sandbox without the stdlib. GCC helps with writing the programs; I just need to implement a VCPU for one of its target architectures.
I've become acquainted with the ARMv7-M and AVR32 references and found them pretty acceptable, but somewhat more powerful than I need. The less/simpler code I have to write for the VCPU implementation, the sooner I will have what I need and the fewer bugs there will be.
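To make concrete what I mean by "implementing a VCPU": the core is just a fetch-decode-execute loop. A purely illustrative sketch with an invented opcode set (a real implementation would follow the ISA manual of whichever GCC target gets picked):

    // Fetch-decode-execute core for a made-up 8-register VCPU with 4 KB of flat memory.
    // The 3-byte instruction format and opcodes are invented for illustration only.
    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <cstdio>

    struct Vcpu {
        std::array<uint32_t, 8>     reg{};    // general-purpose registers
        uint32_t                    pc = 0;   // program counter
        std::array<uint8_t, 0x1000> mem{};    // flat memory: code + data

        bool step() {
            const uint8_t op  = mem[pc];
            const uint8_t a   = mem[pc + 1] & 0x07;   // destination register index
            const uint8_t b   = mem[pc + 2] & 0x07;   // source register index
            const uint8_t imm = mem[pc + 2];          // same byte, read as an immediate
            pc += 3;

            switch (op) {
            case 0x01: reg[a] = imm;                          return true;  // LDI r[a], imm8
            case 0x02: reg[a] += reg[b];                      return true;  // ADD r[a], r[b]
            case 0x03: mem[reg[b] & 0xFFF] = uint8_t(reg[a]); return true;  // STB [r[b]], r[a]
            case 0x04: pc = reg[a] & 0xFFF;                   return true;  // JMP r[a]
            case 0x00:                                        return false; // HALT
            default:
                std::fprintf(stderr, "illegal opcode 0x%02x at 0x%03x\n",
                             unsigned(op), unsigned(pc - 3));
                return false;
            }
        }
    };

    int main() {
        Vcpu cpu;
        // Hand-assembled test program: r0 = 40; r1 = 2; r0 += r1; halt.
        const uint8_t prog[] = {0x01,0x00,40, 0x01,0x01,2, 0x02,0x00,0x01, 0x00,0x00,0x00};
        std::copy(std::begin(prog), std::end(prog), cpu.mem.begin());
        while (cpu.step()) {}
        std::printf("r0 = %u\n", unsigned(cpu.reg[0]));   // prints 42
    }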
UPDATE:
Seems like I found what I need. It was already answered here: What is the smallest, simplest CPU that gcc can compile for?
Thank you all.

Does Visual Studio 2012 utilize all available CPU cores?

I am planning to build a new, very fast developer computer for Visual Studio 2012 and Windows 7 64-bit. I am getting all fast components, like SSDs and 16 GB of RAM. I was wondering whether Visual Studio 2012 is built to utilize all available CPU cores. I am trying to decide between an expensive 6-core i7 CPU and a less expensive quad-core CPU, in terms of whether they make a difference in compile time, since that's what takes the most time when I am not coding.
Note: There's a similar post from 2009 but I wanted to know if VS2012 has much better performance than VS 2010 in terms of cores utilization.
I am balking at the $1000+ price of the i7 Extreme.
I would recommend getting a non-Extreme latest-generation i7 with a decent SSD and double the RAM. If you trace what Visual Studio is actually doing at the file-system level, you will see that it reads and writes a great number of files. Many of these files are cached during the second build in a row, but an SSD and enough RAM seem to be the most important speed-up components in the equation, and a quad-core i5 or i7 is sufficient, preferably with hyper-threading and VT technology in case you want to run 64-bit virtual machines later.
I have also noticed decent compilation performance gains from switching an old computer from IDE to AHCI mode in the BIOS, following the proper guide.
Visual Studio has an option for choosing the maximum number of parallel builds; it will use as many CPU cores as you wish (the command-line equivalent is sketched below).
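For reference, that IDE option roughly corresponds to MSBuild's /m (maxcpucount) switch on the command line, and C++ projects additionally have the /MP compiler option for compiling source files in parallel within one project. A small sketch (the solution name is a placeholder) that prints an invocation matched to the machine's hardware threads:

    // Prints an msbuild invocation matched to this machine's hardware thread count.
    // "/m" (maxcpucount) is the command-line counterpart of the IDE's parallel project
    // builds setting; MySolution.sln is a placeholder.
    #include <algorithm>
    #include <iostream>
    #include <thread>

    int main()
    {
        const unsigned n = std::max(1u, std::thread::hardware_concurrency());
        std::cout << "msbuild MySolution.sln /m:" << n << "\n";
        // For C++ projects, also turn on /MP (Multi-processor Compilation) in the
        // project settings so the .cpp files inside each project compile in parallel.
    }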
However, you shouldn't really focus too much on this option when deciding which CPU to buy. The newest Intel processors use Turbo Boost to speed up processes which use fewer than the maximum number of cores. Also, the Extreme series is very expensive without much of a performance gain; the lower models can usually be overclocked to match the more expensive ones.
If you really feel, however, that your compilation times are too high, you should take a look at IncrediBuild - I've used it myself and must say that it really speeds up the build process. It is able to really understand your system specification and use all the resources possible, and it can use remote build servers as well.
If you are building large C++ projects you need as many cores as you can get!
For example, building Qt 4.8.3 took less than two hours on my eight-core machine with an SSD drive - on a 2-core machine with an HDD it took more than 20 hours.
With BlueGo you can measure how long it takes to build Qt or boost on your system - so you can use it as a benchmark to find out how well suited your system is for building large C++ projects.
Why does memory matter so much when VS 2012 is only 32-bit? Maybe if you have 10 different projects open. Otherwise 8 GB should be more than enough, right? Maybe RAM speed is more important, but I also read that anything over 1600 MHz is a waste. I'm guessing the best thing is an SSD, and a PCI Express card SSD like OCZ makes would be the best option.

Why are xcodebuild and Xcode 4.2 so slow?

I am using Xcode 4.2 on a relatively large project (a few tens of thousands of lines of code) and it is horribly slow. Editing is OK, but whenever I try to compile the project (in Xcode, or with xcodebuild on the command line), my machine (quad-core i7 MacBook Pro, 4 GB RAM) crawls to a halt. I have noticed that directly after starting xcodebuild, it spawns more than 8 clang processes before the "real" compile processes start, and so far no xcodebuild output appears on stdout. I've tried reducing the number of parallel build processes, but lots of clang processes are still launched at the beginning. The project uses 6 or 7 directly dependent external projects and has maybe 120 source files. Under Xcode 3.2 the project used to compile very quickly. What's happening? And how can I make Xcode fast again?
Most of us have three primary options:
Revert to Xcode 3 for daily development.
Throw more hardware at it.
Change your projects' structures and apply large scale development tricks (even though 20-30 KSLOC is not large).
The easiest solution is to revert to Xc3. Yes, Xc4 requires a lot more than Xc3: memory, CPU, disk space and I/O. You will have to determine where your biggest problems are in order to reduce how much it affects you.
I recently bought a new MBP with twice the physical cores and twice the physical memory, upgraded to Lion and upgraded to Xc4 at the same time. The compilation times did improve, but much of the rest is actually slower and far more resource hungry. That's not at all what one would expect from an IDE which also disallows multiple open projects and uses a unified workspace view.
The move to Lion + Xc4 more than doubled my hardware demands in all of the following categories:
Memory
4 GB is now too little for most nontrivial projects using Xc4 and Lion. You can still reduce the pressure. I have 8 GB and 10 GB on my main 2 machines, and Xc4 consumes it all quite easily (but my projects are more complex than yours, unless you write reeaeaaaally long lines). Anyway, you can reduce this problem by:
Buying more memory.
Disabling indexing if you are building huge projects in Xcode. This can halve Xcode's memory consumption.
Running Xcode in 32-bit mode. This is not an option for everyone, because larger projects will exceed the 4 GB limit.
Reducing the number of build processes (again).
Restarting Xcode often (it doesn't do a very good job of cleaning up after itself).
Using clang as your compiler. Clang instances in general use less memory than Apple's GCC 4.2.
Offloading dependent targets which do not change often. Example: you will not need to rebuild third-party libraries daily in most cases.
CPU
Xcode 4 uses a new (more accurate) completion parser.
Pare down your include dependency graphs and dependencies. This is quite easy to do with Obj-C, since every Obj-C instance is a pointer. Example: remove loosely dependent framework includes from your headers (see the sketch after this list).
Separate your dependencies and modules properly. Develop libraries, but try to keep them fairly small and be aware of the dependencies they will add. Example: I led a project where a dev added a small feature (less than 1% of the app), but due to the number of dependencies it required (e.g. Three20 and then a few more), the binary size of the final executable doubled (and the build times went up, and the parser had a lot more work to do). Most of this was not needed for the feature - they just could not be stripped because they were Obj-C symbols.
Reduce the number of translation units, if possible.
Optimize how you use prefix headers. You can share them, and you can create them for no good reason. This benefits the compiler more than the IDE.
Minimize memory usage. GC work (including all NSObject allocs) consumes over 1/3 of the CPU usage in larger projects, but it still spends a lot of time collecting when its heap is huge.
Use external text editors and VC clients. Pretty obvious because Xc4 displays the SBBOD rather often in large projects.
Minimize language complexity. In order of complexity: C, ObjC, C++, ObjC++. The more complex the translations, the longer it will take to parse and compile your sources, especially when your dependencies are high. If you can easily set up language barriers in your dependencies, do so.
You can disable code sense indexing via defaults (also reduces memory demands).
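A minimal C++ illustration of the "remove includes from headers" tip above; the Obj-C equivalent is an @class forward declaration. All names here are made up:

    // widget.h (sketch): a forward declaration replaces the heavy include.
    class HeavyRenderer;              // instead of: #include "HeavyFramework.h"

    class Widget {
    public:
        explicit Widget(HeavyRenderer& r) : renderer_(&r) {}
        void draw();                  // defined in widget.cpp, which does the real include
    private:
        HeavyRenderer* renderer_;     // pointer members only need the forward declaration
    };

    // widget.cpp (sketch): the only file that pays for the heavy header.
    // #include "HeavyFramework.h"
    // void Widget::draw() { /* use renderer_ */ }

Every file that includes widget.h now avoids reparsing the framework, which is where much of the compile and indexing time goes.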
Hard Disk
This can be a speed/size balance.
Buy a faster one (e.g. SSD).
Cleanup and minimize your header dependencies.
Use a RAM Disk, such as Make RAM Disk.
Buy more memory. With the amount Xc4 consumes, it ends up swapping out to disk often in large projects.
Optimize your builds to use pch files appropriately. This is not always the obvious direction: I have not used them for several years in large projects.
Clear out the temp files Xcode and Instruments leave behind, they can be huge. In some cases, you can save them in customized locations. If they do consume tens of GB and your build dir is the same as your boot dir, then you will make your disk work a lot less by cleaning them up regularly.
Builds
In Xc4, running xcodebuild in parallel with Xcode now doubles the work (separate build dirs by default). In Xc3, the default was to build to the same destination. Verify your settings - Xcode will do a ton of redundant building if you allow it (e.g. one target per workspace/config, rather than Xc3's flat build model).
Or just fill a drawer with quad core MacMinis and use that as a dedicated or distributed builder.
File Bugs
(self explanatory)
One more possible solution that might help speed up Xcode 4 in some cases: in my case, the main problem seems to have been that four files from my build/ folder had accidentally been checked in to my git repository. During compilation Xcode notices that the build folder changed and triggers git. Since the build folder contains thousands of files in my case, performance went down. Removing the build/ folder completely from git (it shouldn't have been checked in anyway) reduced the compilation times and system load massively. Performance is still slower than with Xcode 3, but much better than before.
You can switch on Distributed Building in Xcode Preferences and find some friendly person who will help you build your app by forming a compilation cluster with you.
The funny thing is that even when it is off, your compiler still uses a different algorithm/mechanism to build your app at blazing speed compared to the problems before ;)
So that means the folks at Apple have forgotten about lone programmers who don't work in teams, and therefore the solo-compilation scenario was poorly tested in versions 4.0 - 4.2.
Quick Note Regarding 'Throw more hardware at it' approach..
SUMMARY: I experienced a SMALL speed increase from making a SIGNIFICANT hardware upgrade
Test: Build/Run the exact same project on cloned macbooks (where the only difference should be their hardware)
Old Macbook Air (1.86GHZ Core 2 Duo ONLY 2GB RAM)
vs
Brand New Macbook Pro (2.3GHZ Core i7 8GB RAM)
BUILDING ON IPHONE 3GS
Macbook Air 1:00 - 1:15
Macbook Pro ~1:00
=> 0 to 0:15 of speed increase
BUILDING ON IPHONE 4S
Macbook Pro ~0:35
Macbook Air ~0:50
=> ~15 seconds of speed increase
**Partially tested: there DOES appear to be a significant difference in SIMULATOR build times between the 2 machines.
Another culprit for slowness is plugins. The Subversion plugin was absolutely killing my Xcode performance. I followed the instructions in this SO post to disable it. WHEW!

How can I access the Intel CPU counters

Is there any small tool that gives me access to the data gathered by the Intel CPU counters (like L1/L2 cache misses, branch prediction failures ... you know, there are hundreds of them on modern Core 2 CPUs)?
It must work on Windows (while being able to use it with Solaris, FreeBSD, Linux or Mac OS X would of course be nice).
Check out the Intel PCM (Performance Counter Monitor) tool which does exactly what you want to do.
Link: https://software.intel.com/en-us/articles/intel-performance-counter-monitor-a-better-way-to-measure-cpu-utilization
Intel PCM provides a rich API that allows you to instrument your code. Furthermore, to date, PCM is the only tool to read uncore events too.
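As a rough sketch of what instrumenting a region with PCM looks like, modeled on the samples that ship with the library (the header name and API details can differ between PCM versions, and programming the counters needs administrator/root rights):

    #include "cpucounters.h"   // from the Intel PCM source tree
    #include <iostream>

    static void workload()     // stand-in for the code you actually want to measure
    {
        volatile double x = 0.0;
        for (int i = 0; i < 10 * 1000 * 1000; ++i) x += i * 0.5;
    }

    int main()
    {
        PCM* pcm = PCM::getInstance();
        if (pcm->program() != PCM::Success) {          // claim and program the counters
            std::cerr << "PCM could not program the counters\n";
            return 1;
        }

        SystemCounterState before = getSystemCounterState();
        workload();
        SystemCounterState after = getSystemCounterState();

        std::cout << "IPC                " << getIPC(before, after)             << "\n"
                  << "L2 cache hit ratio " << getL2CacheHitRatio(before, after) << "\n"
                  << "L3 cache hit ratio " << getL3CacheHitRatio(before, after) << "\n";

        pcm->cleanup();
    }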
This thread seems a little old but if you're still interested, I wrote a howto recently on this topic using nothing more than rdmsr and wrmsr in Linux. It only deals with the performance counters on an Intel uncore for Westmere, but the process I described might help you figure out what you need if you haven't already. I'm sure Windows has some equivalent program or function call to RDMSR and WRMSR. The problem is you need to be ring 0 (kernel mode) to read MSRs. I have no idea how to do that in Windows. I won't be able to help with any Windows questions but may be able to answer some MSR-related questions if you have any. I'm by no means an expert though.
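For what it's worth, on Linux the rdmsr/wrmsr utilities go through the msr driver (/dev/cpu/N/msr), which you can also read directly from a program; a minimal read-only sketch (the MSR number is a placeholder, and you need root plus the msr module loaded):

    // Reads one MSR on CPU 0 via the Linux msr driver; build with any C++ compiler.
    #include <cstdint>
    #include <cstdio>
    #include <fcntl.h>
    #include <unistd.h>

    int main()
    {
        const uint32_t msr_number = 0x0;                  // placeholder: pick yours from the Intel SDM
        const int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { std::perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t value = 0;
        // The msr driver maps the MSR address onto the file offset, 8 bytes per register.
        if (pread(fd, &value, sizeof(value), msr_number) != (ssize_t)sizeof(value)) {
            std::perror("pread");
            close(fd);
            return 1;
        }

        std::printf("MSR 0x%x = 0x%llx\n", msr_number, (unsigned long long)value);
        close(fd);
        return 0;
    }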
PAPI is a very promising lead, however, I believe they discontinued support for Windows (and therefore .NET C#) quite a few years ago.
On the windows front, Visual Studio 2010 Premium comes with performance explorer. If you run any project or binary in instrumentation mode, you can get access to hardware events such as instructions retired.
The results can be somewhat mixed and inconsistent depending on external factors, but it integrates with Visual Studio nicely and you get detailed counts (average, maximum, total) on a per-method/module level.
The Intel VTune performance analyzer also exposes these natively. I haven't played with this tool yet, but it might offer a more flexible API than what Visual Studio 2010 exposes.
You didn't write whether you are looking for an application or for a library.
For Windows there is Intel VTune, but that is not exactly a small tool. For Linux I have used OProfile, which works without kernel patches.
On OS X, Shark lets you get data from the PMCs. I'm not sure what's available on Windows other than Intel's tools (VTune, as mentioned by drhirsch).
Try this
http://icl.cs.utk.edu/papi/
It is a full library that allows you to read any CPU counter data; it works on both Windows and Linux [and other OSes].
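A minimal sketch of PAPI's low-level event-set API (preset availability varies by CPU, and the loop below is only filler to generate some misses):

    #include <papi.h>
    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
            std::fprintf(stderr, "PAPI init failed\n");
            return 1;
        }

        int event_set = PAPI_NULL;
        if (PAPI_create_eventset(&event_set) != PAPI_OK ||
            PAPI_add_event(event_set, PAPI_L1_DCM) != PAPI_OK ||   // L1 data cache misses
            PAPI_add_event(event_set, PAPI_BR_MSP) != PAPI_OK) {   // mispredicted branches
            std::fprintf(stderr, "could not set up events (preset unavailable?)\n");
            return 1;
        }

        long long counts[2] = {0, 0};
        PAPI_start(event_set);

        // Code under test: stride through a large array to provoke cache misses.
        const long n = 1L << 22;
        long* data = static_cast<long*>(std::calloc(n, sizeof(long)));
        volatile long sum = 0;
        for (long i = 0; i < n; i += 16) sum += data[i];
        std::free(data);

        PAPI_stop(event_set, counts);
        std::printf("L1 data cache misses: %lld\nbranch mispredictions: %lld\n",
                    counts[0], counts[1]);
        return 0;
    }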
This thread looks pretty old, but still: all the above-mentioned counters are available in Intel PCM. It can be used as a Microsoft Perfmon plugin or through a command-prompt interface, and it gives information like L2 and L3 cache hit ratios, cache misses, etc.
