To begin with, in order to use Unreal Engine 4 you have to build it yourself with Visual Studio 2013. This means you are free to tune the compiler settings to improve overall performance.
However, I am a bit of a newbie when it comes to programming, so I need your help: what should I do to improve the program's performance?
I found where to enable the /arch:AVX instruction set in VS2013. However, my CPU also supports XOP, FMA3, and FMA4. How can I enable those?
Related
Using Visual Studio 2019...
I have some huge projects and they take a long time to build.
I want to buy a new CPU: should I go with a CPU that has fast single-core performance (like the Intel Core i9-11900K), or should I choose a CPU with many cores (like the AMD Threadripper 3960X)?
Does VS2019 take advantage of a multi-core CPU when building/running projects?
Thanks.
Yes, the VS IDE does support multi-core builds.
VS first makes a basic performance evaluation based on your current CPU hardware. Then open Tools-->Options-->Projects and Solutions-->Build and Run, where you will see the "maximum number of parallel project builds" setting.
Set the number of parallel builds based on your CPU's performance.
Obviously, it is better to use a multi-core build.
Note: a value of 1 means single-core; increase the value to enable multi-core builds.
If you build C++ projects, there is a second option under Tools-->Options-->Projects and Solutions-->VC++ Project Settings-->Maximum concurrent C++ compilations.
There, a value of 0 means all CPU cores will be used.
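For reference, the same behavior can be driven from the command line; a rough sketch (the solution name and core count below are just placeholders):

    REM Build up to 8 projects of the solution in parallel (MSBuild's /m switch)
    msbuild MySolution.sln /m:8 /p:Configuration=Release

    REM The /MP compiler option lets cl.exe compile multiple .cpp files of one project
    REM concurrently (Project Properties-->C/C++-->General-->Multi-processor Compilation)
    cl /MP /O2 /c *.cpp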
I recently changed my CPU to an AMD Ryzen 3700X (8 cores) and my motherboard to a B450 TOMAHAWK MAX (MS-7C02).
The problem: Visual Studio utilizes only 5-10% of the CPU.
What I tried:
Changing parameters in the BIOS (changed the base frequency, turned off Cool'n'Quiet)
Turned off power saving mode in Control Panel settings
Tried the CPU with a stress-test tool (HeavyLoad) - the CPU runs at 100% there.
I searched for similar cases and solutions, but I couldn't find anything related to a problem like this. It really bugs me, and I'm not sure why such a problem could occur.
Any help is welcome and very much appreciated!
My guess is that Visual Studio is doing something that is not multithreaded and is therefore only using one core instead of all 8 cores.
Are you compiling code in VS? If so, there may be a VS setting to make the compiler use multiple threads.
I get about a 3-4x difference in computation time for the same CUDA kernel compiled on two different machines. Both versions run on the same machine and GPU device. The obvious explanation for the difference is different compiler settings. Although there is no single perfect configuration and the tuning should be customized per kernel, I wonder if there is any clear guideline for choosing the right settings. I use Visual Studio 2010. Thank you.
Compile in release mode, not debug mode, if you want fastest performance. The -G switch passed to the nvcc compiler will usually have a negative effect on GPU code performance.
It's generally recommended to select the right architecture for the GPU you are compiling for. For example, if you have a cc 2.1 capability GPU, make sure that setting (sm_21, in the GPU code settings) is being passed to the compiler. There are some counter-examples (e.g. code compiled for cc 2.0 sometimes seems to run faster), but as a general recommendation it is best.
Use the latest version of CUDA (and its compiler). This is especially important when using GPU libraries (CUFFT, CUBLAS, etc.). (Yes, this is not really a compiler setting.)
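As a rough illustration of the first two points (the file names are made up; sm_21 matches the cc 2.1 example above), a debug and a release build on the nvcc command line would look something like this:

    REM Debug build: -G largely disables device-code optimizations, so it runs slower
    nvcc -G -g -arch=sm_21 kernel.cu -o app_debug

    REM Release build: optimized device code targeted at the cc 2.1 GPU
    nvcc -O3 -arch=sm_21 kernel.cu -o app_release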
I'm working on a diploma project that relies heavily on mathematical calculations and should present some results in 3D. For these purposes I decided to use CUDA or OpenCL for the parallel computation of the mathematical part and, most likely, OpenGL for presenting the results. In addition, the project should be deployable on clusters (running MS Windows); for this, my project supervisor recommended MPI.
My question is the following: where is it easier to combine all these components, in MS Visual Studio or in Qt?
The main part is CUDA + OpenCL + OpenGL; it will be the core of the project.
P.S. This question is not meant to start a holy war between Qt and MS Visual Studio.
OpenCL is not limited to GPUs; it can be used for parallel programming on clusters as well. Intel, for example, provides an OpenCL implementation aimed at multi-core CPUs and clusters.
So my recommendation is to use OpenCL for both GPU computing and clustering. MPI (Message Passing Interface) is mainly a way to communicate between tasks running on separate cluster nodes; it is not so much a clustering framework in itself.
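To make the first point concrete, here is a minimal sketch (assuming an OpenCL runtime with CPU support, such as Intel's, is installed and you link against OpenCL.lib) that enumerates the platforms and reports how many CPU devices each one exposes:

    // Minimal sketch: list OpenCL platforms and count their CPU devices.
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    int main() {
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(0, nullptr, &numPlatforms);
        if (numPlatforms == 0) {
            printf("No OpenCL platforms found.\n");
            return 0;
        }

        std::vector<cl_platform_id> platforms(numPlatforms);
        clGetPlatformIDs(numPlatforms, platforms.data(), nullptr);

        for (cl_uint i = 0; i < numPlatforms; ++i) {
            char name[256] = {0};
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, nullptr);

            // Asking specifically for CPU devices shows that OpenCL is not GPU-only.
            cl_uint cpuCount = 0;
            if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_CPU, 0, nullptr, &cpuCount) != CL_SUCCESS)
                cpuCount = 0;

            printf("Platform %u (%s): %u CPU device(s)\n", i, name, cpuCount);
        }
        return 0;
    }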
I am planning to build a new, very fast developer computer for Visual Studio 2012 and Windows 7 64-bit. I am getting all fast components, like SSDs and 16 GB of RAM. I was wondering whether Visual Studio 2012 is built to utilize all available CPU cores. I am trying to decide whether to get an expensive 6-core i7 CPU or a less expensive quad-core CPU, in terms of whether they make a difference in compile time, since that's what takes the most time when I am not coding.
Note: There's a similar post from 2009, but I wanted to know whether VS2012 utilizes cores much better than VS2010.
I am balking at the $1000+ price of the i7 Extreme.
I would recommend getting a non-Extreme latest-generation i7 with a decent SSD and double the RAM. If you trace what Visual Studio is actually doing at the file-system level, you will see that it reads and writes a great number of files. Many of these files are cached on a second consecutive build, but an SSD and enough RAM seem to be the most important speed-up components in the equation; a quad-core i5 or i7 is sufficient, preferably with Hyper-Threading and VT technology in case you want to run 64-bit virtual machines later.
I have also noticed decent compilation performance gains from switching an old computer from IDE to AHCI mode in the BIOS, following the proper guide.
Visual Studio has an option for the maximum number of parallel builds - it will use as many CPU cores as you allow.
However, you shouldn't focus too much on this option when deciding which CPU to buy. The newest Intel processors use Turbo Boost to speed up workloads that use fewer than the maximum number of cores. Also, the Extreme Series is very expensive without much of a performance gain; the lower models can usually be overclocked to match the more expensive ones.
If you really feel that your compilation times are too high, you should take a look at IncrediBuild - I've used it myself and must say that it really speeds up the build process. It is able to understand your system's specification and use all available resources, and it can also use remote build servers.
If you are building large C++ projects, you need as many cores as you can get!
For example, building Qt 4.8.3 took less than two hours on my eight-core machine with an SSD - on a 2-core machine with an HDD it took more than 20 hours.
With BlueGo you can measure how long it takes to build Qt or Boost on your system, so you can use it as a benchmark to find out how well suited your system is for building large C++ projects.
Why does memory matter so much when VS 2012 is only 32-bit? Maybe if you have 10 different projects open; otherwise 8 GB should be more than enough, right? Maybe RAM speed is more important, but I also read that anything over 1600 MHz is a waste. I'm guessing the best thing is an SSD, and a PCI Express SSD card like the ones OCZ makes would be the best option.