Two of my projects require gigantic amounts of RAM for compiling.
I have already figured out how to make sure that they do not build in parallel
-- by specifying the build order.
Now I also want to make sure that, within the critical project, multiple .cpp files are not being compiled in parallel.
Does anybody have a clue?
But I definitely want to keep the parallel build option enabled for all the other projects.
Use the /MP compiler option, as described here. It controls whether (and how many) .cpp files are compiled in parallel within a single project, so you can leave it off (or set /MP1) for the critical project while keeping it enabled for all the others.
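For illustration only (assuming a VS 2010-or-later .vcxproj; the project name and fragment are hypothetical), you could pin the setting in the critical project's file so its translation units compile one at a time while the rest of the solution keeps /MP:

    <!-- CriticalProject.vcxproj: no per-file parallel compilation in this project -->
    <ItemDefinitionGroup>
      <ClCompile>
        <MultiProcessorCompilation>false</MultiProcessorCompilation>
      </ClCompile>
    </ItemDefinitionGroup>

Project-level parallelism (msbuild /m, or the "maximum number of parallel project builds" option in the IDE) is a separate setting and stays untouched.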
What logic does Xcode's modern build system use to sequence or parallelize build phases? I realize that input/output files can be defined to sequence interdependent build phases, but is that the only consideration?
A few of the more complex projects I work on have as many as 10 run-script build phases. While I'd like to benefit from the fact that some can run in parallel, we previously leveraged the legacy build system's respect for top-down sequencing to ensure things happen in order. Are there any simple ways to ensure sequencing that don't rely on input/output files?
Simply put, there's no great way.
I wound up doing a couple of things to reduce complexity:
Consolidate run-script build phases into two: pre-compile, post-compile
Utilize multiprocessing in Python to run groups of scripts in sequence (builds on top of in-house tooling); a rough sketch of this follows below
I still define "output files" generated during either phase - for instance, to ensure that a script-generated (or retrieved) file gets copied into the product bundle. This has reduced overhead and improved build times. I still think Apple needs to implement a better mechanism for sequencing run-script build phases.
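A minimal sketch of that second point, assuming hypothetical script paths and shell scripts as the units of work: groups run one after another, and the scripts within a group run in parallel.

    # run_phases.py -- illustrative only; script paths are placeholders
    import subprocess
    from multiprocessing import Pool

    SCRIPT_GROUPS = [
        ["scripts/generate_version_header.sh", "scripts/fetch_configs.sh"],  # pre-compile
        ["scripts/copy_resources.sh", "scripts/post_process_bundle.sh"],     # post-compile
    ]

    def run_script(path):
        # check=True makes a failing script fail the whole build phase
        subprocess.run(["/bin/sh", path], check=True)

    if __name__ == "__main__":
        for group in SCRIPT_GROUPS:
            with Pool(processes=len(group)) as pool:
                pool.map(run_script, group)  # parallel within the group
            # the next group starts only after every script above has finished

Xcode still sees just two run-script phases, so its own scheduling stays simple.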
So, I have an interesting situation. I have a group that is interested in using CI to help catch developer errors. Great - we are all for that. The part I am having trouble wrapping my head around is that they want to build interdependent projects in isolation from one another.
For example, we have a solution for our common/shared libraries that contains multiple projects, some of which depend on others in the solution. I would expect that if someone submits a change to one of the projects in the solution, the CI server would try to build the solution. After all, the solution is aware of dependencies and will build things, optimized, in the correct order.
Instead, they have broken out each project and are attempting to build them independently, without regard for dependencies. This causes other errors because dependent DLLs might not yet exist(!), and creates a possible chicken-and-egg problem if related changes were made in two or more projects.
This results in a lot of emails about broken builds (one for each project), even if the last set of changes did not really break anything! When I raised this issue, I was told that this was a known issue and the CI server would "rebuild things a few times to work out the dependencies!"
This sounds like a bass-ackwards way to do CI builds. But maybe that is because I am older and ignorant of newer trends - so, does anyone know of a CI setup that builds interdependent projects independently? And if so, for what good reason?
Oh, and they are expecting us to use the build outputs from the CI process, and each built DLL gets a potentially different version number. So we no longer have a single version number for the output of all related DLLs. The best we can ascertain is that something was built on a specific calendar day.
I seem to be unable to convince them that this is A Bad Thing.
So, what am I missing here?
Thanks!
Peace!
I second your opinion.
You risk spending more time dealing with the unnecessary noise than with the actual issues. And repeating portions of the CI verification pipeline only extends the overall CI execution time, which goes against the CI goal of shortening the feedback loop.
If anything, you should try to bring as many dependent projects as possible (ideally all of them) under the same CI umbrella, to maximize the benefit of fast feedback on breakages/regressions in any part of the system as a whole.
Personally I also advocate using a gating CI system (based on pre-commit verifications) to prevent regressions rather than just detecting them and relying on human intervention for repairs.
Currently, we build a group of static libraries prior to building our app. The issue is that for each library there is some variation of the ./configure, make, test sequence. I would like to be able to cache the results of the configure step to speed up the build, since it is common to build on the same platform multiple times. We are thinking about wrapping each step in the build process in an SCons process, but we're not sure that this would work. Any ideas?
You could use SCons to wrap your configure and make scripts. As long as you enumerate all your dependencies and generated targets, SCons would be able to determine whether or not to run each of the build steps. This sounds pretty complicated, though. Why not convert your configure and build flow to SCons, or write a simple makefile for this configure/make dependency? There are also ways to add MD5 hashing to make (if you're drawn to SCons because of its hashing of dependencies).
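As a rough sketch of the wrapper approach (the library name, paths, and targets below are made up), each step becomes a Command with its inputs and outputs declared, so SCons can skip steps whose dependencies haven't changed:

    # SConstruct -- illustrative wrapper around an autotools-style library
    env = Environment()

    # Running ./configure produces the Makefile; SCons reruns this step
    # only if the configure script (the declared source) changes.
    makefile = env.Command(
        target='libfoo/Makefile',
        source='libfoo/configure',
        action='cd libfoo && ./configure')

    # make depends on the generated Makefile and the library sources.
    libfoo = env.Command(
        target='libfoo/.libs/libfoo.a',
        source=[makefile] + Glob('libfoo/src/*.c'),
        action='cd libfoo && make')

    Default(libfoo)

Here SCons only decides whether to rerun ./configure or make; make's own dependency tracking still handles the per-file work inside the library.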
I sometimes have issues with the incremental rebuild on Visual C++ (currently 2003). Some dependencies do not seem to be checked correctly, and some files aren't built when they should be. I suppose those issues come from the timestamp-based approach to incremental rebuilds.
I don't consider it a huge issue when building debug builds at my desk; however, for distributable builds it is a problem.
Is it safe to use incremental builds on a build server, or is a full build a requirement?
You need a build you distribute to be reproducible, should users experience problems that need investigating.
I would not rely on an incremental build. Also, I would always delete all source from the build machine, and fetch it from scratch from the source control system before building a release. This way, you know you can repeat the build process again by fetching the same source code.
If you use an incremental build, the build will be different each time because only a subset of the system will need to be rebuilt. I think it's just good to eliminate as many possible differences between release builds as possible. So, for this reason, incremental builds are out.
It's a good idea to label or somehow mark the versions of each source file in the source control system with the version number of the build. This enables you to keep track of the exact source that went into building the release. With a decent source code control system the labels can be used to track down all the changes that were made to the code between one release and the next. This can be a real help when trying to track down a bug that you know was introduced between the two releases.
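For example, with Subversion a label is just a tag copy; the repository URL and build number here are placeholders:

    svn copy http://svn.example.com/repo/trunk \
             http://svn.example.com/repo/tags/build-1.0.123 \
             -m "Tag the exact sources used for release build 1.0.123"

Given two such tags, svn diff (or your source control system's equivalent) between them lists every change made between the two releases.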
Incremental builds can still be useful on a development machine when you're not distributing the build, just for saving time during the code/debug/test/repeat development cycle.
I'd argue that you should avoid the "is it safe" question. It's simply not worth looking into.
The advantage of incremental builds is the cost saving. It is small per build, but it adds up across all the debug builds a developer typically makes, and then adds up further across all the developers on your team.
On the other hand, release builds are rare. The costs of a release build are not in developers waiting on them, they're found much more in the team of testers that need to validate it.
For that reason, I find that incremental builds are without a doubt cost-saving for debug builds, but I refuse to spend any time calculating the small saving I'd get out of incrementally building a release build.
I would always prefer to do a full clean and rebuild for any release for peace of mind if nothing else. Assuming that you're talking about externally released builds and not just builds using the Release configuration, the fact that these builds are relatively infrequent means that any time saving will be minimal in the long run.
Binaries built using the incremental build feature are going to be bigger and slower, even for release builds, because they necessarily contain scaffolding to enable incremental builds that isn't needed in fully optimized builds. I'd recommend not using incremental builds for this reason.
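In Visual C++ that scaffolding comes mainly from incremental linking, which release configurations normally switch off. As an illustration only (a VS 2010-style .vcxproj fragment; the configuration and platform names are assumed):

    <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
      <LinkIncremental>false</LinkIncremental>
    </PropertyGroup>

LinkIncremental corresponds to the linker's /INCREMENTAL switch; /INCREMENTAL:NO is what a fully optimized release link normally uses.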
I'm currently in the process of setting up a continuous integration environment at work. We are using VisualSVN Server and CruiseControl.NET. Occasionally a build will fail, and a symptom is that there are conflicts in the CruiseControl.NET working copy. I believe this is due to the way I've set up the Visual Studio solutions. Hopefully the more projects we run in this environment, the better our understanding of how to set them up will be, so I'm not questioning why the conflicts happen at this stage. To fix the builds I delete the working copy and force a new build - this works every time (currently). So my questions are: is deleting the working copy a valid part of a continuous integration build process, and how do I go about it?
I've tried solutions including MSTask and calling delete from the command line but I'm not having any luck.
Sorry for being so wordy - good job this is a beta :)
Doing a full delete before or after your build is good practice. This means that there is no chance of your build environment picking up an out-of-date file. You're building exactly against what is in the repository.
Deleting the working copy is possible, as I have done it with NAnt.
In NAnt I would have a clean script in its own folder, outside the one I want to delete, and would then invoke it from CC.NET.
I assume this should also be possible with a batch file. Take a look at the rmdir command: http://www.computerhope.com/rmdirhlp.htm
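For instance, a one-line batch script would do it (the working-copy path is just a placeholder):

    rem delete_working_copy.bat -- wipe the CC.NET working copy so the next build checks out from scratch
    rmdir /s /q "C:\CCNet\Projects\MyProject\WorkingDirectory"

You could run it by hand before forcing the build, or wire it into the project's pre-build tasks.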
#pauldoo
I prefer my CI server to do a full delete, as I don't want any surprises when I go to do a release build, which should always be done from a clean state. But it should be able to handle both - no reason why not.
#jamie: There is one reason why you may not be able to do a clean build every time when using a continuous integration server -- build time. On some projects I've worked on, clean builds take 80+ minutes (an embedded project consisting of thousands of C++ files to check out and then compile against multiple targets). In this case, you have to weigh the benefit of fast feedback against the likelihood that a clean build will catch something that an incremental build won't. In our case, we worked on improving and parallelizing the build process while at the same time allowing incremental builds on our CI machine. We did have a few problems because we weren't doing clean builds, but by doing a clean build nightly or weekly you could remove the risk without losing the fast feedback of your CI machine.
If you check CC.NET's JIRA, there is a patch checked in that implements CleanCopy for Subversion, which does exactly what you want: just set CleanCopy to true inside your source control block, just like with the TFS one.
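Once that patch is in your build, the Subversion block would presumably look something like this (the URL and paths are placeholders):

    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/MyProject/trunk</trunkUrl>
      <workingDirectory>C:\CCNet\Projects\MyProject\WorkingDirectory</workingDirectory>
      <cleanCopy>true</cleanCopy>
    </sourcecontrol>

With cleanCopy set, the working directory is wiped before each checkout, so the manual delete-and-force step goes away.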
It is very common, and generally a good practice, for any build process to do a 'clean' before doing any significant build. This prevents any 'artifacts' from previous builds from tainting the output.
A clean is essentially what you are doing by deleting the working copy.
#Brad Barker
Clean means to just wipe out build products.
Deleting the working copy deletes everything else too (source and project files etc).
In general it's nice if your build machine can operate without doing a full delete, as this replicates what a normal developer does. Any conflicts it finds during update are an early warning of what your developers can expect.
#jamie
For formal releases, yes, it's better to do a completely clean checkout. So I guess it depends on the purpose of the build.