I have a project that consists of a binary and a set of static libraries it depends on. I'm trying to get the elapsed time for building each library without success.
I have tried to use AddPreAction()/AddPostAction() to calculate the elapsed time, but AddPreAction() is only called once all the source files the library depends on have been compiled (which makes sense).
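For reference, a minimal sketch of that attempt (the target name mylib and the src/*.c layout are hypothetical; this goes in an SConstruct, where Environment and Glob are provided by SCons itself):

import time

env = Environment()
start_times = {}

def record_start(target, source, env):
    # Fires just before the archiving step, i.e. after the objects are already built
    start_times[str(target[0])] = time.time()

def report_elapsed(target, source, env):
    name = str(target[0])
    print('%s: %.2f s' % (name, time.time() - start_times[name]))

lib = env.StaticLibrary('mylib', Glob('src/*.c'))
env.AddPreAction(lib, record_start)
env.AddPostAction(lib, report_elapsed)

As described above, this only brackets the final archive command, not the compilation of the individual object files.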
There is a post related to this issue:
How can I measure the build time for each component of a scons build?
But I would prefer a more elegant solution to overriding some environment variables and then having to parse the output in order to calculate the times.
Thanks in advance.
You can get timing information on various aspects of the build using the --debug=time SCons command-line option, as documented in the SCons man page.
Here's an excerpt; you can read the rest at the link provided above:
--debug=time
Prints various time profiling information: the time spent executing each
individual build command; the total build time (time SCons ran from
beginning to end); the total time spent reading and executing SConscript
files; the total time SCons itself spent running (that is, not
counting reading and executing SConscript files); and both the total
time spent executing all build commands and the elapsed wall-clock
time spent executing those build commands...
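Invoking it is just a matter of adding the flag to a normal build, e.g.:

scons --debug=time

SCons then reports the time spent on each individual build command as the build runs, plus the totals at the end, so you can attribute time to the commands that build each library.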
Related
I am trying to generate the Xcode build summary for my project so that I can optimize the bottlenecks, as per the attached screenshot.
The total build time shown at the bottom is 135.3 seconds, while the first module, CompileC, takes 449.356 seconds. I know Xcode does some parallelization while building the project, but I am not sure how it calculates this summary time. Can anyone explain this?
I know this is old, but I was looking into this and came across this comment by Rick Ballard, an Apple Xcode build system engineer:
Yes – many commands, especially compilation, are able to run in parallel with each other, so multicore machines will indeed finish the build much faster than the time it took to run each of the commands.
In other words, the quoted numbers are core-seconds, not real time, except for the last one. So if you have six cores, your CompileC task might only take about 449/6 ≈ 75 seconds of wall-clock time. With, say, 660 core-seconds in total, you'd get about 110 clock-seconds, which looks about right against the 135.3-second total.
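A back-of-the-envelope model of that reasoning (assuming perfect parallelism, which a real build only approximates; both figures are the rough ones from the answer above):

cores = 6
core_seconds = 660.0  # rough sum of the per-phase timings in the summary
print(core_seconds / cores)  # ~110 s, the right ballpark for the reported 135.3 s wall-clock total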
I would like to get stats about CPU usage, memory consumption, filesystem activity, and the time spent compiling the various stages and components/sublibraries (plus other important bits) after a successful build done with make when building gcc.
Is it possible to get stats out of make?
I don't know of any tool that can do everything.
For a very basic overview (time spent in the process), use time make ....
If you need more details or exact figures, you need a profiler. For CPU usage, use gprof. For memory usage, you can use valgrind. For IO, you can use ioprofile or iogrind.
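If coarse, whole-build numbers are enough, GNU time's verbose mode reports peak memory use and fault counts alongside CPU time (assuming /usr/bin/time is GNU time rather than the shell built-in):

/usr/bin/time -v make -j4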
How would I get the current system time with the MIPS instruction set? I would like to benchmark some programs and would like to find the time in milli or nanoseconds that it takes for them to complete.
I am aware that I could run the assembly code from within C and time it with the C time libraries; however, I would like to do this in MIPS assembly alone.
Is there any way to speed up the time it takes to run a make compile? We have a package that takes 12 minutes to build, and we're looking to speed that up. Are there any flags to pass to make, or a way to run it in parallel?
Try running make -jN, with N being the number of cores in your system, if you haven't already (see the example after this list).
Try using fewer compile-time optimizations if it applies (avoid -O3 in particular).
You can also take a look at distcc.
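For example, on Linux you can let the shell supply the core count (assuming GNU coreutils' nproc is available):

make -j$(nproc)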
I build a huge project frequently, and this takes a long time (more than one hour) to finish even after configuring pre-compiled headers. Are there any guidelines or tricks to let make work in parallel (e.g. starting gcc in the background, etc.) to allow for faster builds?
Note: Sources and binaries are too large to be placed in a RAM file system, and I don't want to change the directory structure or build philosophy.
You can try
make -j<number of jobs to run in parallel>
make -jN is a must now that most machines are multi-core. If you don't want to write -jN each time, you can put
export MAKEFLAGS=-jN
in your .bashrc.
You may also want to check out distcc.
If your project is becoming too big for one machine to handle, you can use one of the distributed make replacements, such as Electric Cloud.
If you want to run your build in parallel,
make -jN
does the job, but keep in mind:
N is usually set to the maximum number of hardware threads your machine supports; passing a larger number will not fail, but the extra jobs just compete for the same cores and can even slow the build down.
make doesn't support parallel builds with -jN on MS-DOS; it just does a serial build. If you specify -jN there, it forces N=1.
Read more here, from the make source: http://cmdlinelinux.blogspot.com/2014/04/parallel-build-using-gnu-make-j.html