After some pain and suffering, I managed to install everything necessary for MinGW to work on a computer that is not on the network.
It worked nicely for a couple of days, but now I'm experiencing very long delays before anything starts to happen after I give the "make" command to build my project.
I tried disabling the network, as suggested here: Why is MinGW very slow?
But it didn't help.
Note that it's not the actual compilation/linking process that is slow; rather, the startup of those processes seems to take forever: 5-10 minutes. Unless I just ran it, in which case it starts in 10-30 seconds.
I know, it used to take a lot longer to load those tapes on Commodore, but over the years I have grown impatient.
Any ideas?
Try doing make -r (without implicit rules). For me it was the difference between 30 seconds and a fraction of a second for a single .cpp file.
Explanation:
I had the same problem with MinGW make long ago. I used make -d to investigate. It was then obvious that make tries a gazillion implicit rules for every dependency file: if my file had a dependency on shared_ptr.hpp, make checked for shared_ptr.hpp.o, shared_ptr.hpp.c, shared_ptr.hpp.cc, shared_ptr.hpp.v, shared_ptr.hpp.f, shared_ptr.hpp.r, and dozens of other combinations. Of course those files didn't exist. It looks like checking a file's existence/modification time (when it doesn't actually exist) is a lot slower on Windows than on Linux (because on Linux I didn't see any difference with or without the -r switch).
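A quick way to see the scale of this is to dump make's rule database with and without built-in rules (a sketch, assuming GNU make; the grep pattern just counts one well-known built-in rule as a proxy):

```shell
# GNU make prints its internal database with -p; -f /dev/null feeds it
# an empty makefile, so only the built-in rules are loaded.
builtin=$(make -p -f /dev/null 2>/dev/null | grep -c '^%\.o: %\.c') || true
# With -r, the built-in implicit rules are gone entirely.
norules=$(make -rp -f /dev/null 2>/dev/null | grep -c '^%\.o: %\.c') || true
echo "built-in '%.o: %.c' rule present: default=$builtin, with -r=$norules"
```

To make this permanent rather than typing -r every time, you can put `MAKEFLAGS += -r` near the top of the Makefile, or clear the old-style suffix list with an empty `.SUFFIXES:` target.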
Related
Some years ago I started using Qt on both Windows 7 and Linux Ubuntu, and it would always compile fast, with MinGW being used on Windows. But in the last couple of years or so, maybe due to updates to both Qt and MinGW, I started noticing a slowdown in compile speed on Windows. I did some research trying to find out why MinGW had become so slow compared to Linux (it wasn't before!), and everything people told me was that MinGW was slower on Windows and that it would be better, if possible, to just use Linux.
Since I wanted to continue my project, I followed the suggestion, and since then I've been using Linux with relatively no problems. The situation now is that I must go back to Windows (now updated to Windows 10) to make visual corrections for this OS, and I need to once again work with MinGW, facing the same problem as before.
But for some reason the slowness of MinGW seems to have become even worse! While before I was at least able to compile the app in around 4 minutes, the last time I tried it took 38 minutes before I gave up and went to sleep. And this is for a project that takes only 1 minute 3 seconds to compile on Linux [under the same compile configuration]!
Well, I'm still aware of MinGW's slowness, but as a quick look around this problem on the web reveals, that is just too slow: all the comparisons one can find in other threads here on SO report at most 2x-3x more time to compile a project, not 38x+!!
So I would like to know what kind of possible problems I might have in my Windows for this exaggerated slowness to happen. I know I ended up installing at least 4 different versions of MinGW; could this have brought the problem?
It's also interesting to notice that when compiling with the -j option and watching the Compile Output log in Qt Creator alongside Process Explorer, there are moments when the compilation simply pauses for 10 seconds or more and CPU usage drops from ~100% to close to 5%, with nothing happening until it suddenly continues the compilation process. I'm sure these constant pauses account for part of the above-average time, but I have no idea why MinGW shows this behaviour.
You might want to check where the time is spent.
There are a lot of tools that allow you to capture what a certain process is doing; I'll name just two of them:
ProcMon
XPerf or its successor
But to analyze the reports generated by these tools you need a rather deep understanding. If that doesn't help, temporarily disable other running services and programs step by step (if you want to know which program causes the problem), or disable all of them at once.
Looking at the spikes of cpu usage that TaskManager or Procexp by sysinternals show might help too to identify those components that block your cpu.
If your antivirus is what makes the compile so slow, you can define exceptions so that the antivirus will not scan certain programs or paths.
So perhaps it is easier to first try the compilation with the antivirus software disabled, or even from a clean live-boot Windows CD.
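For example, if the machine uses Windows Defender, exclusions can be added from an elevated PowerShell prompt (a sketch; the paths below are placeholders for your own toolchain and project directories, not anything from the question):

```powershell
# Hypothetical paths - adjust to your MinGW install and project tree.
Add-MpPreference -ExclusionPath "C:\MinGW"
Add-MpPreference -ExclusionPath "C:\projects\myapp"
# Excluding the compiler/linker/make processes themselves also helps,
# since they are spawned hundreds of times per build.
Add-MpPreference -ExclusionProcess "gcc.exe", "g++.exe", "ld.exe", "make.exe"
```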
I was compiling Qt5 for an embedded device on the device itself. This takes a long time, since the Qt sources are about 800 MB and the embedded device isn't exactly fast.
Everything was running well until a power outage prevented the device from finishing make, halting the compilation process.
Is there any way to resume from where it left off?
If it's a well-formed makefile, simply re-running make should allow you to resume the process.
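A tiny demonstration of why this works (the file names are made up): make compares timestamps, so targets that were already built before the interruption are simply skipped on the next run.

```shell
# Hypothetical two-step "build" in a scratch directory.
mkdir -p /tmp/resume-demo && cd /tmp/resume-demo
printf '%s\n' \
  'all: step1.o step2.o' \
  'step1.o: step1.c ; cp step1.c step1.o' \
  'step2.o: step2.c ; cp step2.c step2.o' > Makefile
touch step1.c step2.c
make step1.o   # pretend the power failed after step 1 finished
make           # "resuming": only step2.o is rebuilt, step1.o is skipped
```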
The make -t command mentioned (assuming GNU make) simply touches the files (updates the timestamps) and doesn't actually perform the actions in the makefile, so if you've run it, you'll probably have to start over.
Also, rather than building on the slow target, consider setting up a cross-compiler and build system. It's often a lot of work initially, but it pays considerable dividends over time. I would recommend crosstool-ng as one of the least painful ways of setting up such an environment.
I am using Kubuntu 10.10 with a 4-core CPU. When I use 'make -j2' to build a C++ project, two cores' CPU usage goes to 100%, the desktop environment becomes unresponsive, and the build makes no progress.
Version info:
GNU make version: 3.81
gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5)
How can I resolve this problem? Thanks.
There's not really enough information here to give you a definitive answer. First it's not clear if this happens only when you run with -j2. What if you run without parallelism (no -j)? When you say "2 core's CPU usage [goes to] 100%", what is happening on those CPUs? If you run "top" in another terminal and then start your build, what is showing in top?
Alternatively, if you run "make -d -j2" what program(s) is make running right before the CPU goes to 100%?
The fact that the desktop is unresponsive as well hints at some other problem, rather than CPU usage, since you have 4 cores and only 2 are busy. Maybe something is chewing up all your RAM? Does the system come back after a while (indicating that the OOM killer got involved and stomped something)?
If none of that helps, you can run make under strace, something like "strace -f make -j2" and see if you can figure out what is going on. This will generate a metric ton or two of output but if, when the CPU is pegged, you see something running over and over and over you might get a hint.
Basically I can see these possibilities:
It's not make at all, but rather whatever command make is running that's just bringing your system down. You imply it's just compiling C++ code so that seems unlikely unless there's a bug somewhere.
Make is recursing infinitely. Make will rebuild its own makefile, plus any included makefiles, then re-exec itself. If you are not careful defining rules for rebuilding included makefiles, make can decide they're always out of date and rebuild/re-exec forever.
Something else
Hopefully the above hints will set you on a path to discovering what's going on.
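On the second possibility, the usual safe idiom for included dependency files looks something like this (a sketch with illustrative file names): let the compiler emit the `.d` files as a side effect of compiling, so there is no separate rule for make to loop on.

```make
SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)

app: $(OBJS)
	$(CC) -o $@ $(OBJS)

# -MMD -MP makes gcc write a .d dependency fragment while compiling,
# so there is no standalone "rebuild the .d file" rule that make
# could decide is perpetually out of date.
%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

# The leading '-' tells make not to complain (or try to remake
# anything) if the .d files don't exist yet on a clean build.
-include $(OBJS:.o=.d)
```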
Are you sure the project is prepared for parallel compilation? Maybe the prerequisites aren't correctly ordered.
If you build the project with just "make", does the compilation finish? If it gets to the end, it is a target dependency problem.
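A classic way a makefile can pass a serial build but fail under -j is an undeclared dependency that serial ordering happened to satisfy (a sketch with made-up names):

```make
# Works with plain 'make' only because gen.h is listed first and so
# happens to be built before main.o. Under 'make -j2' both jobs can
# start at once, and the compile of main.c may not find gen.h.
all: gen.h main.o

gen.h:
	./generate-header.sh > gen.h

# Fix: declare the real dependency, so make orders (and, if needed,
# serializes) the two steps even in parallel builds.
main.o: main.c gen.h
```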
I am aware that there are a couple of questions that look similar to mine, e.g. here, here, here or here. Yet none of these really answer my question. Here it goes.
I am building a modified version of a Chromium browser in VS2008 (mostly written in C++). It has 500 projects in one solution with plenty of dependencies between them. There are generally two problems:
Whenever I start debugging (press F5 or the green Play button) for the first time in a session, VS freezes, and it takes a couple of minutes before it recovers and actually starts debugging. Note that I have disabled building before running, because whenever I want to build my project I use F7 explicitly. I do not understand why it takes so long to "just" start a compiled binary. Probably VS is checking all the dependencies and making sure everything is up to date, despite my request not to build the solution before running. Is there a way to speed this up?
Every time I perform a build it takes about 5-7 minutes even if I have only changed one instruction in one file. Most of the time is consumed by the linking process, since most projects generate static libs that are then linked into one huge dll. Apparently incremental linking only works in about 10% of the cases and still takes considerably long. What can I do to speed it up?
Here is some info about my software and hardware:
MacBook Pro (Mid-2010)
8 GB RAM
dual-core Intel i7 CPU with HT (which makes it look like 4-core in Task Manager)
500GB Serial ATA; 5400 rpm (Hitachi HTS725050A9A362)
Windows 7 Professional 64-bit
Visual Assist X (with disabled code coloring)
Here are some things that I have noticed:
Linking only uses one core
When running solution for the second time in one session it is much quicker (under 2-3 seconds)
While looking up information on the VS linker, I came across this page:
http://msdn.microsoft.com/en-us/library/whs4y2dc%28v=vs.80%29.aspx
Also take a look the two additional topics on that page:
Faster Builds and Smaller Header Files
Excluding Files When Dependency Checking
I have switched to the component build mode for the Chromium project, which reduced the number of files that need to be linked. Component build mode creates a set of smaller DLLs rather than a set of static libraries that are then linked into one huge chrome.dll. I am also using incremental linking a lot, which makes linking even faster. Finally, linking for the second and subsequent times gets faster, since the necessary files are already cached in memory and disk access is unnecessary. Thus, when working incrementally and linking often, I get as low as 15 seconds for linking webkit.dll, which is where I mostly change the code.
As for execution, it shows the same behavior as linking: it runs slowly only the first time, and with every subsequent run it gets faster, until it takes less than 3-5 seconds to start the browser and load all symbols. Windows caches the most frequently accessed files in memory.
I am working on a project using Google's cmockery unit testing framework. For a while, I was able to build the cmockery project with no problems, e.g. "./configure", "make && make install", etc., and it took a reasonable amount of time (1-2 minutes or so). After working on other miscellaneous tasks on the computer and going back to rebuild it, it became horrendously slow (e.g. after fifteen minutes it was still checking system variables).
I did a system restore to earlier in the day and it goes back to working properly for a time. I have been very careful about monitoring any changes I make to the system, and have not been able to find any direct correlation between something I am changing and the problem. However, the problem inevitably recurs (usually as soon as I assume I must have accidentally avoided the problem and move on). The only way I am able to fix it is to do a system restore to a time when it was working. (Sometimes restarting the machine works as well, sometimes it does not.)
I imagine that the problem is between the environment and autoconf itself rather than something specific in cmockery's configuration. Any ideas?
I am using MinGW under Windows 7 Professional.
Make sure that antivirus software is not interfering. Often, antivirus programs monitor every file access; autoconf accesses many files during its operation and is likely to be slowed down drastically.