How can I make MSBuild use less CPU resources - visual-studio

This might be a bit of a weird question, but I'd like to make MSBuild slower.
At work I have a rather slow project (compile time around 15 minutes) and currently only one Jenkins node that builds it.
I am now trying to figure out how to use our own work machines as additional nodes if we so choose. The problem is that when I run MSBuild on the project, it uses every CPU core at 100%, which makes my system quite unusable during that time.
I'd like to throttle MSBuild a bit so it doesn't use every core to its fullest. Is that possible?
There is the switch "/maxcpucount[:numberOfProcessors]", but even when I use it I don't see any difference in CPU usage.
Can anybody help me here?
Screenshot of CPU usage with /maxcpucount:1

Okay, I finally found the right search phrase and came across https://developercommunity.visualstudio.com/idea/436208/limit-cpu-usage-of-visual-studio.html
Apparently MSBuild now has a switch that sets the process priority to low (-low). It still uses 100% of the CPU, but the PC is at least usable. Through some environment variables it also seems possible to limit the number of processors used, but that still appears to be experimental.
Also, after much looking around, I found this:
Pass /MP option to the compiler using MSBuild
where the option /p:CL_MPCount=2 is used.
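Putting those switches together, an invocation might look like the following sketch (the solution name is a placeholder; the -lowPriority/-low switch only exists in newer MSBuild versions, and CL_MPCount only affects the C++ compiler's /MP file-level parallelism):

```shell
# Build at low process priority, with at most 2 projects building in
# parallel and at most 2 cl.exe compile threads per project.
msbuild MySolution.sln -maxcpucount:2 -lowPriority:true -p:CL_MPCount=2
```

-maxcpucount (short form -m) controls how many projects MSBuild builds concurrently, which is why it alone may not change CPU usage for a single C++ project that compiles its files in parallel via /MP.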

Related

Why isn't some resource maxed out to 100% during Visual Studio compile?

I'm compiling a large solution of about 100 C++ projects with Visual Studio. During compilation, neither memory, CPU, disk, nor Ethernet is utilized anywhere near 100% (according to the Task Manager Performance tab). CPU is often as low as 25%, and memory and disk utilization seem to be as low as 5-10%.
So if no resource is utilized at 100%, what's the bottleneck? What's limiting my compile speed? I was honestly expecting it to be CPU. But it seems that it's not.
Am I perhaps measuring incorrectly? What should I expect to be the limiting resource when compiling? How can I speed things up? If there's something else that is the limitation (like RAM but as I/O via a cache or something) then what's the right tool/method to measure the bottleneck?
Additional info: I'm definitely using "maximum number of parallel projects to build" = 8, and multi-processor compilation is enabled for all the Visual C++ projects. My machine has 8 logical processors, so I really don't think I'm just maxing out one core; that would show up as 12.5% usage on my machine (and I do see that often with single-threaded applications).
Memory-wise, maybe your application simply doesn't use that much memory.
As for the CPU usage, your program might be working on one thread, or to be more specific, on one single core of your CPU;
so if you have a quad-core CPU, your application won't use anything above 25%.
As for the Ethernet usage, I think the Task Manager shows your computer's Ethernet capacity, so maybe you have an internet speed of 10 Mb/s while your Ethernet is capable of 50 Mb/s.
Here is a link I just looked up: https://askleo.com/why_wont_my_program_use_more_than_25_of_the_cpu/
Great question.
If you just set the compilation to run all projects in parallel, you get the same result @VasiliyGalkin describes: too much work for your setup.
But due to the way VS compiles each project you need a certain overlap, so limit the number of parallel projects to 2-3 depending on the actual PC you run it on. If your PC is a monster with 16+ cores you might be able to go 1-2 up. You might be happy with the result, or find that it still doesn't fully use your CPU due to other limits in VS.
This article gives an in-depth analysis of why it's slow; the conclusion is that you need to set up your compilation to fit VS's idea of the world.
A brief summary of the article:
I would guess your setup is something like this: multiprocess compilation off, which leaves most cores idle while each project builds.
Turning multiprocess compilation on and turning "enable minimal rebuild" off lets the translation units within one project compile in parallel.
Still, within one project your per-unit compilation times are likely uneven, due to different compilation flags / precompiled headers (see the article for more). Fixing that evens out the utilization, and the three stages then run in progression after each other.
Now add max parallel projects = 2 or 3 to use all capacity.
Ideally VS would offer an option to use X threads; then the problem would mostly go away, since no more threads would be started than are usable, and it would just pick the next task from the next project whenever there are free resources.
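As a concrete sketch of that advice (solution name hypothetical), cap the project-level parallelism and give each project only a few compiler threads, instead of letting every project spawn a full set of cl.exe workers:

```shell
# 2 projects at a time, up to 4 parallel compiles within each project:
msbuild BigSolution.sln -maxcpucount:2 -p:CL_MPCount=4
```

The idea is that 2 x 4 roughly matches an 8-logical-processor machine without oversubscribing it.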
Memory is very unlikely to become a bottleneck for compilation; normally it's a CPU-intensive process. The fact that your CPU is at 25% instead of 100% may indicate that your projects are compiled sequentially, using 1 core out of 4.
Try checking the "maximum number of parallel builds" setting in the Visual Studio menu: Options -> Projects and Solutions -> Build and Run.
You may find the screenshots attached to my recent question on a similar topic useful, though it deals with the opposite problem: too much going on in parallel instead of too little :)

How to reduce the size of Qt build?

I'm trying to build Qt with visual studio (2010), but the build has so far taken up over 16GB of my tiny hard drive. I've already uninstalled practically every program I have.
Is there any way I can compile Qt without it hogging so much space, yet still get every feature? And once the build completes, will the unneeded files be cleaned up, or do I have to do that manually? How big will the build be (as said earlier, I've already reached 16+ GB)?
I'm new to this, so please speak in layman's terms. Thanks.
Start with 40-50 GB of free space. The build generates a lot of temporary files, which you can clean up manually later.
If that is too much for your computer, get an external hard disk; 1000 GB should be less than $100.
You can reduce its size by avoiding Qt WebKit, etc. (if you want). Check this link.

Program is slower when compiled

Any suggestions on why a VB6 program would be slower when compiled than when running in the debugger? I'm compiling it with "Optimize for fast code."
Notes:
I measure performance by running the compiled version and the non-compiled version on the same machine. I base my conclusions on wall-clock time, since 30 minutes vs. 100 minutes is a big enough difference to be visible.
Several months ago, I configured a debugging tool to attach itself to my program whenever it ran. I totally forgot that I had done this.
Special thanks to Process Monitor for making this very obvious.
Turning it off made the program run fast.
AppVerifier, for those who are curious.
You should select the compile to Native Code option
The compile to P-code option forces your program to run in an interpreted mode, which can be slower.
There are some optimizations in the advanced section. Try them out too.
Some more points to consider:
Are you running the compiled application in the same environment? Is it taking the same data as input?
How did you know that it is slow? What if your timing program is wrong?
How do you measure the performance?
It is hard to judge the performance from what you've said. You have to ensure the running environment is exactly the same when comparing performance.
Are you running on the same machine? Do you connect to a DB? Does the DB have the same workload on different runs? You need to isolate other factors before reaching such a conclusion.

How can I determine why a build runs slowly in Visual Studio 2005?

I want to know whether it is possible to find out why a Visual Studio 2005 (MSBuild) build is taking a long time to build a project.
Suddenly we are getting 7-minute build times on some computers, while others take less, such as 4 minutes.
So I think I need to identify the changes that were made to the project that are causing the longer build time.
Any ideas on how I can do that?
Take a look at MSBuild Profiler to analyze where the slow down is. Based on that information, dig into what each task is doing and factor in the things that Chris mentions in his answer.
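Short of a dedicated profiler, MSBuild's console logger can also print a per-project/target/task timing summary, which is a cheap first look at where the time goes (solution name hypothetical; switch availability varies by MSBuild version):

```shell
# Print a performance summary at the end of the build:
msbuild MySolution.sln -verbosity:minimal -clp:PerformanceSummary

# Or capture a full diagnostic log to inspect offline:
msbuild MySolution.sln -verbosity:diagnostic -fileLogger
```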
Is it a C++ project? I had the same problem when I moved my project from Visual Studio 6.0. Turning off code optimization saved a lot of time; it was almost impossible to work with optimization activated.
Visual Studio re-evaluates the references on every build. If any of your references are on network drives, that could be slowing it down.
Some computers are naturally going to perform builds faster than others. This is, in part, a function of processor, RAM, and HD speeds.
As for why you see 7-minute build times, there could be any number of reasons: amount of code in the project(s), number of projects in the solution, number of post/pre-build steps, speed of the network in downloading anything it needs from source control, number of other processes running on your computers, amount of RAM and other resources available to perform the builds...
You get the idea.

Operating System Overheads while profiling?

I am profiling C code in Microsoft VS 2005 on an Intel Core 2 Duo platform.
I measure the time (secs:millisecs) consumed by my function. But I have some doubts about the accuracy of this measurement, since the operating system will not run my application continuously; it schedules other apps/services in between the execution of my code. (Although I have no major applications running while I do the profile run, Windows still has a lot of code of its own that it runs by preempting my app.) Because of all this, I believe the profiling number (time taken by my app to run) is not accurate.
So my question is: is there any way to find out the operating system's scheduling overhead on a typical Windows system (I run Windows XP)? E.g. if my application says it ran for 60 milliseconds, how much of those 60 msec was really used by my app, and how much was it sitting idle due to being preempted by some other task scheduled by the OS?
or
At least, is there a ball-park number for such OS overhead, based on experience you've had doing something similar?
@Kogus: Even if I run outside the debugger (a standalone app from a command prompt), it could still be preempted by the OS, causing an incorrect measurement of the time consumed by my app.
Isn't it?
-AD
I think you are going to have some problems with the granularity. See similar questions GetLocalTime() API time resolution and Is gettimeofday() guaranteed to be of microsecond resolution?
Also, you may want to take a look at the Windows Resource Kits Tools which include timeit.exe (similar to time on unix/linux) to give you elapsed and process times.
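Usage is roughly as follows (program names hypothetical; timeit.exe's exact output format varies by Resource Kit version):

```shell
# Windows Resource Kit: report elapsed and process times for a command
timeit myapp.exe

# The unix/linux analogue reports the same split:
time ./myapp    # "real" = wall clock; "user"+"sys" = CPU time actually used
```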
Suggestion
Try running on multi-CPU systems.
The best way of doing this is with a dedicated profiling tool; there are lots out there. I haven't used one for C for a few years, so someone else will hopefully be able to give better advice. As you are using Visual Studio 2005, this might be a good place to start:
AQ, but I've never used it.
1 - Put some debug logging in your code (include timestamps of course), and run it outside of the debugger
2 - Run again in the debugger
3 - Repeat many times, to get statistically valid data.
4 - Compare.
If there is a significant difference in the average execution time of the standalone vs. the debugger, then you are right to be suspicious of the OS (or the overhead of the debugger hooks themselves...). If no difference, then don't sweat it.
Edit0: Obviously the debug messages have some overhead of their own. You may want to leave those in the code even when you are running from the debugger. That way, both the standalone and the debugger are running the very same code.
Edit1: I misunderstood the question. I thought your concern was that, while debugging, the OS might interrupt your app more frequently than in a normal mode of execution. If you want to know how much time your app actually spent working, just compare the elapsed time to the "CPU Time" column in the Task Manager.
Edit2: Compare the time returned by GetProcessTimes for your process to the actual execution time. The difference is the time spent by the CPU on somebody else.
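GetProcessTimes is the Win32 API for this; as a portable illustration of the same idea, here is a POSIX shell sketch showing how wall-clock time and CPU time diverge for a mostly-idle process:

```shell
# A process that mostly sleeps accumulates elapsed (wall-clock) time
# but almost no CPU time; the difference is time spent off the CPU.
start=$(date +%s)
sleep 1                 # stand-in for an app that is idle or pre-empted
end=$(date +%s)
echo "elapsed: $((end - start))s"
times                   # CPU time consumed by this shell and its children
```

Here `times` reports well under a second of CPU time even though roughly one second of wall-clock time elapsed.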
