Xcode 10 build phase sequencing

What logic does Xcode's modern build system use to sequence or parallelize build phases? I realize that input/output files can be defined to sequence interdependent build phases, but is that the only consideration?
A few of the more complex projects I work on have as many as 10 run-script build phases. While I'd like to benefit from the fact that some can run in parallel, we previously leveraged the legacy build system's respect for top-down sequencing to ensure things happen in order. Are there any simple ways to ensure sequencing that don't rely on input/output files?

Simply put, there's no great way.
I wound up doing a couple of things to reduce complexity:
Consolidate run-script build phases into two: pre-compile, post-compile
Utilize multiprocessing in Python to run the groups of scripts in sequence, parallelizing the scripts within each group (builds on top of in-house tooling)
I still define "output files" generated during either phase - for instance, to ensure that a script-generated (or retrieved) file gets copied into the product bundle. This has reduced overhead and improved build times. I still think Apple needs to implement a better mechanism for sequencing run-script build phases.
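For reference, here is a minimal sketch of that multiprocessing approach (the group names and script paths are made up, and the in-house tooling it builds on is not shown): each consolidated run-script phase invokes the driver with its group name, scripts within a group run in parallel, and the two groups stay sequenced simply because they live in separate build phases.

    #!/usr/bin/env python3
    """Hypothetical driver invoked by each consolidated run-script phase,
    e.g. `python3 run_group.py pre-compile`."""
    import multiprocessing
    import subprocess
    import sys

    GROUPS = {
        "pre-compile":  ["scripts/generate_version.sh", "scripts/fetch_assets.sh"],
        "post-compile": ["scripts/copy_resources.sh", "scripts/upload_symbols.sh"],
    }

    def run_script(path):
        # Each script gets its own process; check=True fails the build on error.
        subprocess.run(["/bin/sh", path], check=True)

    if __name__ == "__main__":
        group = GROUPS[sys.argv[1]]
        # Scripts within a group run in parallel; the groups themselves remain
        # sequenced because they are attached to separate build phases.
        with multiprocessing.Pool() as pool:
            pool.map(run_script, group)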

Related

How can I disable parallel build for a certain project?

Two of my projects require gigantic amounts of RAM for compiling.
I have already figured out how to make certain that they do not build in parallel
-- by specifying the build order.
Now I also want to make certain that the critical project does not compile multiple .cpp files in parallel.
I definitely want to keep the parallel build option enabled for all the other projects, though.
Does anybody have a clue?
Use the /MP argument to the build, as described here. /MP takes an optional process count, so passing /MP1 for the memory-hungry projects limits each of them to a single compiler process, while the other projects keep /MP and continue compiling their source files in parallel.

Order of independent Build Configs in TeamCity

I'm migrating our build system over to TeamCity and, because we have quite long build times, I'm trying to make good use of parallelism in build configurations.
If two configs can run in parallel they are obviously not dependent on each other. However there are some cases where, if two parallel builds are serialised (due to lack of available agents) then I would prefer one to run ahead of another (for example one is a set of regression tests that I'd like to see the result of before a packaging step is run - but if resources are available I'd like them both to run concurrently).
I can't find an explicit way to specify ordering of logically independent builds. However I've observed that the build order tends to be lexicographical - although I'm not sure if that's on the config name or ID.
I could experiment but would prefer a more definite answer, if possible.
This used to be available as a plugin, but has since been bundled into the product.
Go to the build queue and click on Configure Build Priorities.
If you add a priority class with a high number, you can then associate it with the build configuration you'd like to run first.
Managing Build Priorities - TeamCity documentation
Hope this helps

Scheme Script vs. Build Phase Script

After I make a build I want to copy some files into my Xcode project.
I discovered that I could do this in either of two places:
In "Build Phases", with a custom build step.
In the Scheme editor, where I can execute scripts before and after the different "tasks":
Build (This is where I could add my script)
Run
Test
Profile
Analyze
Archive
I don't completely understand the differences or possible implications of the two approaches, and I am wondering when to choose one over the other. Thanks for any clarification.
After I make a build I want to copy some files into my Xcode project.
I assume you want to copy files to your build product, not the Xcode project.
There are several subtle differences between scheme and build phase scripts. Here are some of them:
Scheme scripts are part of the scheme, so sharing with other developers is more configurable. Build phase scripts on the other hand are part of the target and cannot be skipped simply by choosing another scheme.
Scheme scripts can run before the dependency checking, so you can use them to modify source files and still get up-to-date results. This is not possible with build phase scripts.
The information passed to the script in environment variables differs slightly. Depending on what information you need, you sometimes have to choose the right kind of script.
Build phase scripts are run conditionally only if the build process succeeds until their place in the target.
Build phase scripts can be configured to only run when input files changed.
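As a concrete illustration of the environment-variable point (and of copying a file into the built product): build phase scripts inherit the target's build settings as environment variables. A hedged sketch follows, written in Python for this example; SRCROOT, BUILT_PRODUCTS_DIR and UNLOCALIZED_RESOURCES_FOLDER_PATH are standard Xcode build settings, while the Config/generated.plist path is made up.

    #!/usr/bin/env python3
    """Illustrative "Run Script" build phase that copies a generated file into
    the built product bundle."""
    import os
    import shutil

    src = os.path.join(os.environ["SRCROOT"], "Config", "generated.plist")
    dst = os.path.join(os.environ["BUILT_PRODUCTS_DIR"],
                       os.environ["UNLOCALIZED_RESOURCES_FOLDER_PATH"])

    # Declaring src as an input file and the copied plist as an output file on
    # this phase lets the new build system skip the step when nothing changed.
    shutil.copy(src, dst)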
There isn't much difference between the two; however, you have more control over where in the build sequence build phase scripts run, so they are preferable (for example, you could modify files that have already been copied by standard Xcode build phases).
I always use build phase scripts myself and have never used scheme scripts. They are more visible and more manageable.

How to get Hudson to correctly build multiple modules changed by a single commit

Consider a Maven project with multiple interdependent modules: let's say, three jar modules A, B, and C, which are dependencies for a war module Z. I have a separate Hudson build for each of these modules, so that only modules that have changed are re-built.
My issue is that if I commit a changeset that changes both module A and module Z, Z may be built first (against the old A) and fail, before A completes and triggers a rebuild of Z, which then passes. Allowing builds to regularly fail for reasons to do with build ordering rather than "real" failures desensitizes us to real failures; we end up ignoring builds that have legitimately broken because we are used to assuming they will eventually flip back.
I have been managing this through the use of quiet periods, blocking when upstream builds are running, etc. But in practice, my build has more modules than the example I've given, many of which take a while to build and test. I also have a small horde of diligent developers making frequent commits.
This means my jar modules are constantly building, only rarely leaving a gap for my war module(s) to build. So the war doesn't build very frequently, meaning it takes a long time to find out when we've broken it, and also takes longer to identify which change broke it.
Also, the constant running of builds means that if I commit a change that touches jars A and B, the war file Z may be built once for jar A (which builds quickly), and then again for jar B (which takes longer). This makes it hard to understand the results of a given commit.
I've considered using the join plugin, but this appears to require all of the modules to build every time. Since I actually have quite a few jar modules, I really don't want to have to build them all every time, I only want to build the ones that have changed for a given commit.
Are there any better ways to handle this?
Thanks
This is always a difficult problem (and I've re-written this answer more than once!)
In terms of a technical solution, you want something that will wait for the build of several different jobs to be not running before it starts to run. If it's difficult to quantify, it's going to be difficult to put in place. I'll be very interested to see what technical solutions are suggested in this thread.
I guess you have to look at why your jobs are being run, and how often. If there's any code that requires unit testing in your WAR, could you move it out into its own module? That way you can run only integration tests every hour/30 mins using the war and not worry about where and when the individual modules are at.
You may want to also look at what your modules contain. Do they ALL have to be modules? Can you perhaps reduce the fragmentation - it might help reduce the complexity of what you are attempting to schedule :)
I understand and applaud your efforts to get as much tested as soon as possible - but sometimes a smoke test is all you can do if there's a constant churn of code.
The approach we're now looking at is combining some Maven modules into single Hudson jobs, rather than having a one to one mapping of modules to jobs.
Specifically, if a war module's dependencies are fairly small and quick to build on their own, building them in the same job with the war ensures that all of the code from a single commit is built together, at least for that given war file.
This does result in duplication - we have multiple war files using the same jars, so the jars are essentially rebuilt for every war, rather than once only. But in practice, the jars are quick to build, and this makes the war files conceptually cleaner.
This would be less attractive if the jars took a while to build and test, since the combined jars + war job would then be quite long, giving us long feedback loops for problems within the jars. Getting the balance right is important.
So my takeaway: don't assume that one Hudson/Jenkins job per module is the best way to go, and don't be afraid to rebuild the same code in multiple jobs.

What is the best way to setup an integration testing server?

Setting up an integration server, I'm unsure about the best approach to using multiple tasks to complete the build. Is it best to put everything in one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
I use TeamCity with an nant build script. TeamCity makes it easy to setup the CI server part, and nant build script makes it easy to do a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming you are in a .NET project):
CruiseControl.NET.
NAnt tasks for each individual step. NAntContrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects you find that there are different levels of tests and activities which take place when someone does a checkin. Sometimes these can grow to the point where it takes a long time after a checkin before a dev can see whether they have broken the build.
What I do in these cases is create three builds (or maybe two):
A CI build is triggered by checkin and does a clean SVN Get, Build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, which could run hourly (if there are changes) and does the same as the CI build but runs more comprehensive and time-consuming tests.
An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for the steps, letting you do historical analysis and continuously tweak the builds to keep them snappy. Managers find it hard to accept that a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the build steps down with enough granularity and keeping each one a complete unit.
For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
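A hedged sketch of how those whiteboard steps might be declared as Buildbot steps in a master configuration (the factory/step API from buildbot.plugins is real; the repository URL, step names and make targets are placeholders, and in the real setup the per-platform compiles run on workers for their respective platforms):

    from buildbot.plugins import steps, util

    factory = util.BuildFactory()
    factory.addStep(steps.Git(repourl="https://example.com/plugins.git",
                              mode="incremental", name="checkout"))
    # One step per whiteboard item; the make targets are placeholders.
    factory.addStep(steps.ShellCommand(name="build plugin pieces",
                                       command=["make", "plugin-pieces"]))
    factory.addStep(steps.ShellCommand(name="make final plugins",
                                       command=["make", "plugins"]))
    factory.addStep(steps.ShellCommand(name="run plugin tests",
                                       command=["make", "test-plugins"]))
    factory.addStep(steps.ShellCommand(name="build intermediate IDE (bootstrap)",
                                       command=["make", "ide-bootstrap"]))
    factory.addStep(steps.ShellCommand(name="build final IDE",
                                       command=["make", "ide"]))
    factory.addStep(steps.ShellCommand(name="run IDE tests",
                                       command=["make", "test-ide"]))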
I would definitely break down the jobs. Chances are you'll make changes to the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.
You should be able to create one big job from the smaller pieces, anyway.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make sure the test server is built and configured as closely as possible to the deployment environment.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
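As a small illustration of that pattern (discrete, logged operations tied together by a thin high-level script), here is a sketch in Python; the answer's own environment uses NAnt for this, and the commands and file names below are placeholders.

    #!/usr/bin/env python3
    import logging
    import subprocess

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    log = logging.getLogger("build")

    def step(name, command):
        """A reusable, logged operation: the kind of unit worth copying between projects."""
        log.info("starting: %s", name)
        subprocess.run(command, check=True)
        log.info("finished: %s", name)

    # Discrete operations; each one is easy to lift into another project.
    def get_latest_source():
        step("get latest source", ["svn", "update", "."])

    def build():
        step("build", ["msbuild", "Solution.sln", "/t:Rebuild"])

    def run_tests():
        step("unit tests", ["nunit3-console", "Tests.dll"])

    if __name__ == "__main__":
        # The high-level script is just the project-specific ordering of the
        # reusable operations above.
        get_latest_source()
        build()
        run_tests()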
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write/debug the scripts) and the CI server (since we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts and then we manipulate the results (ie, archive the build output, flag failing tests etc).
