When should I check "Parallelize Build" for an Xcode scheme?

I see that this option is unchecked in my current scheme and that a few places around the web recommend against it in certain cases. Can someone give a more thorough method of determining when this option can be checked for a scheme?

We don't know the internals of the "Parallelize Build" setting, but we can deduce why it might not always be beneficial.
First, it's good to understand what "Parallelize Build" does. Source:
This option allows Xcode to speed up total build time by building
targets that do not depend on each other at the same time. This is a
time-saver on projects with many smaller dependencies that can easily
be run in parallel.
When you have many targets that depend on one another, this option can cause problems.
For example, imagine that one target is a framework that your application target depends on. If you modify the framework target, there are cases where you MUST build the framework target BEFORE the application target. Parallelizing these won't work, because for the application target and framework target to work nicely together they must be "in sync": we can't build the application target without first compiling the changes in the framework target.
The above is a simple example, one that Xcode probably handles correctly already, but some projects get very complex, and without accurate information about your target dependencies Xcode may not be able to parallelize your targets correctly.
In summary, the setting is likely beneficial and can reduce build times. If you enable it and don't see any problems with code being out of sync across targets, keep it on; otherwise, turn it off. As with all performance settings, test and measure whether you actually see shorter builds.
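If you want to measure it from the command line, a rough sketch could look like this (the workspace and scheme names are hypothetical):

time xcodebuild -workspace MyApp.xcworkspace -scheme MyApp clean build

Toggle the "Parallelize Build" checkbox in the scheme editor between runs and compare the wall-clock times; xcodebuild also accepts -parallelizeTargets and -jobs flags if you prefer to control parallelism from the command line.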

Why might it be necessary to force rebuild a program?

I am following the book "Beginning STM32" by Warren Gay (excellent so far, btw) which goes over how to get started with the Blue Pill.
A part that has confused me is that, while we are putting our first program on the Blue Pill, the book advises forcing a rebuild of the program before flashing it to the device. I use:
make clobber
make
make flash
My question is: Why is this necessary? Why not just flash the program, since it is already built? My guess is that it is just to practise building a program that hasn't been built yet... but I also wonder whether rebuilding before flashing to the device is best practice? The book does not say why.
You'd have to ask the author, but I would suggest it is "just in case", no more; perhaps a lack of trust that the makefile specifies all possible dependencies. If the makefile were hand-written without automatically generated dependencies, that is entirely possible. It is also easier simply to advise a rebuild than to explain every situation in which it might or might not be necessary, and any such list would not be exhaustive anyway.
From the author's point of view, it eliminates a number of possible build-consistency errors that are outside his control, so you don't end up thinking the book is wrong when the problem is actually something on your side.
Even with automatically generated dependencies, a project may have dependencies that the makefile or dependency generator does not catch - for example, resource files consumed by custom code-generation tools.
For large projects developed over a long time, some seldom-modified modules may well have been compiled with an older version of the toolchain; a clean build ensures everything is compiled and linked with the current tools.
make decides what to rebuild based on file timestamps. If you have build variants controlled by command-line macros, make cannot tell which objects depend on such a macro, so when switching variants (from a "debug" to a "release" build, for example) it is a good idea to rebuild everything to ensure every module is consistent and compatible.
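As a minimal sketch (illustrative file names, not the book's actual makefile) of why switching variants silently reuses stale objects:

CC     = arm-none-eabi-gcc
CFLAGS = -O2 -MMD -MP              # -MMD/-MP emit .d files so header edits are tracked
ifdef DEBUG
CFLAGS += -g -DDEBUG               # variant selected purely by a command-line macro
endif
OBJS = main.o uart.o

firmware.elf: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

-include $(OBJS:.o=.d)

clean:
	rm -f firmware.elf $(OBJS) $(OBJS:.o=.d)

Running make and then make DEBUG=1 rebuilds nothing, because no file timestamp changed even though every object should now carry -DDEBUG; a make clean (or the book's make clobber) before the second build is the safe way to switch.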
I would suggest that during the build/debug cycle you use incremental builds for development speed, as intended, and perform a full rebuild for a release or when changing some build configuration (such as enabling/disabling floating-point hardware, or switching to a different processor variant).
If during debugging you get results that make no sense - breakpoints and code stepping not aligning with the source code, or a crash or behaviour that seems unrelated to some small change you made (perhaps that code has not even been executed) - it may sometimes be down to a build inconsistency (for a variety of reasons, usually with complex builds). In such cases it is wise to at least eliminate build inconsistency as a cause by performing a rebuild-all.
Certainly if you are releasing code to a third party, such as a customer or for production of some product, you would probably want to perform a clean build just to ensure build consistency. You don't want users reporting bugs you cannot reproduce because the build they were given is not reproducible.
Rebuilding the complete software is good practice because it regenerates all the dependencies and symbol files, with paths that match your local machine.
If you ever need to debug the application with a debugger, you will need the symbol file and the paths to where your source code lives. If you flash an application without rebuilding it, the debugger may not be able to map addresses back to your sources, because the symbol information refers to the paths where that binary was originally compiled.
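A hedged sketch of that workflow using the commands from the question (build/firmware.elf is a hypothetical path; substitute whatever your makefile actually produces):

make clobber
make                                  # rebuild so the ELF's debug info matches your current sources and paths
make flash                            # flash the binary that was just built
arm-none-eabi-gdb build/firmware.elf  # the ELF carries the symbols and source paths the debugger needs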

Avoiding build twice when using a shared project together with build generated code

I have a visual studio solution with multiple projects. One generates code files as part of pre-build (grpc classes via Grpc.Tools). There is also a shared project that extends the partial classes built as part of that pre-build.
However, sometimes for one reason or another - like compiling the client half of this (client uses the shared project to extend its own classes), compilation will error because the shared project can't find the generated classes yet. Presumably they don't exist. It's fixed easily by compiling the project twice.
Is there something I can do in this scenario? Is it possible to somehow move validating/compiling the shared project "further down" the compilation pipeline? Or even just set that particular project to try and compile twice if there's an error? Or is this the kind of thing that realistically I should just live with given what I'm doing - I haven't found any other references to this problem. It's not that big of an issue and it wouldn't happen very often, but I'd like to handle it reasonably if I can.
Edit
If I wasn't clear, this is a shared project, as in a .shproj, a project that is not compiled separately. The project that references it includes it and builds it all together as one.
If project B depends on project A, then project A must be built before project B. Visual Studio is smart enough to figure out the build order this way. Incidentally, this is also one of the reasons (among many) why circular dependencies simply cannot work.
I suspect that your projects are currently not linked via a dependency, as this issue wouldn't occur if there were such a link. Perhaps your second project is accessing the first project's files via the file system? That's just a guess though.
You can use this "A before B which depends on A" behavior of the build process to your advantage. Have project B (i.e. the project you need to go second) add project A (i.e. the project you need to go first) as a dependency. This forces VS to build them in the appropriate order.
Some remarks:
I am unsure whether VS is able to omit dependencies that you add but never actually use (i.e. you never reference their content). I can't find any confirmation on this point (but absence of proof is not proof of absence!). Even if that is the case, it could easily be worked around by having a dummy class in B that actually references and uses something from project A.
Keep in mind that during a regular build, VS does not rebuild projects that have not changed since the last build. If this is an issue for you (unsure if it is, you didn't add enough context), make sure to always rebuild or clean to make sure that a new build will be triggered.
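For ordinary (non-shared) projects, the explicit ordering described above is just a project reference, e.g. with the .NET CLI (hypothetical project names; note this does not apply directly to a .shproj, which compiles as part of whichever project imports it):

dotnet add ClientApp/ClientApp.csproj reference ProtoGen/ProtoGen.csproj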
However, sometimes for one reason or another - like compiling the client half of this (client uses the shared project to extend its own classes), compilation will error because the shared project can't find the generated classes yet. Presumably they don't exist. It's fixed easily by compiling the project twice.
That it only happens sometimes and can be fixed by "trying again" points at one thing: you have a race condition. But a race condition during compilation is not something I have heard of or encountered before.
I have a few possible culprits, but in the end race conditions are notoriously hard to debug:
- Maybe the compiler that deals with the shared project returns before it is finished - which should be impossible - or
- Maybe something causes the main project to compile before the shared project's files are ready.
- Maybe a third-party tool - like a virus scanner or automatic backup tool - interferes?
- Maybe the shared project's compiled files are hosted on a network drive, and there is sometimes just the slightest delay between "compiled" and "visible to all other computers on the network"?
Usually the proper mechanisms for dependent compilation should deal with such issues. That indicates that what you have there is probably not the most stable setup.

Should the STM32 HAL be included as a precompiled library

I have a Keil STM32 project for an STM32L0. I sometimes (more often than I would like) have to change the include paths or global defines. This triggers a complete recompile of all code, because the toolchain needs to 'check' for behaviour changes resulting from those changes. The problem is: I didn't necessarily change parameters relevant to the HAL, so (as far as I understand) those files don't need to be recompiled. This recompilation takes up quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done for every microcontroller separately as they have different HALs).
ps. the question is not necessarily only useful for this specific example but the example gives some scope to the question.
pps. for people who aren't familiar with the STM32 HAL. It is the standardized interface with which the program interfaces with the underlying hardware. It is supplied in .c and .h files instead of the precompiled form of the STD/STL.
update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx,USE_B_BOARD,USE_HAL_DRIVER, REGION_EU868,DEBUG,TRACE
Only STM32L072xx, and DEBUG are useful for configuring the HAL library and thus there shouldn't be a need for me to recompile the HAL when I change TRACE from defined to undefined. Therefore it seems to me that the HAL could be managed separately.
edit
Seeing as a close vote has been cast: I've read the don't ask section and my question seeks to constructively add to the knowledge of building STM32 programs and find a best practise on how to more effectively use the HAL libraries. I haven't found any questions on SO about building the HAL as a static library and therefore this question at least qualifies as unique. This question is also meant to invite a rich answer which elaborates on the pros/cons of building the HAL as a separate static library.
The answer here is.. it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - having HAL sources directly in your project means rebuilding HAL every time anything in its (and underlying) headers changes, which you've already noticed. Downside of it is longer build times. Upside - you are sure that what you build is what you get.
Option #2 - having HAL as a precompiled static library. Upside - shorter build times, downside - you can no longer be absolutely certain that the HAL library you include actually works as you want it to. In particular, you'd need to make sure in some way that all the #defines are exactly the same as when the library has been built. This includes project-wide definitions (DEBUG, STM32L072xx etc.), as well as anything in HAL config files (stm32l0xx_hal_conf.h).
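For a sense of what option #2 involves in practice: if the HAL were built with GNU make instead of from within Keil, a minimal sketch might look like this (paths, defines and CPU flags are illustrative; the essential point is that the library's defines must match the application's):

HAL_DEFS = -DSTM32L072xx -DUSE_HAL_DRIVER -DDEBUG        # must match the application project
HAL_SRCS = $(wildcard hal/Src/*.c)
HAL_OBJS = $(HAL_SRCS:.c=.o)

libhal.a: $(HAL_OBJS)
	arm-none-eabi-ar rcs $@ $(HAL_OBJS)

hal/Src/%.o: hal/Src/%.c
	arm-none-eabi-gcc $(HAL_DEFS) -Ihal/Inc -mcpu=cortex-m0plus -mthumb -Os -c $< -o $@

The application then links against libhal.a and no longer recompiles HAL sources, but any change to HAL_DEFS or to stm32l0xx_hal_conf.h means rebuilding the library by hand - exactly the consistency risk described above.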
Seeing how you're a Keil user - maybe it's just a matter of enabling multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. HAL library isn't so large that build times should be a concern when it comes to rebuilding its source files.
If I were to express my opinion and experience - personally I wouldn't do it, as it may lead to lower reliability or to side effects that are very hard to diagnose and will only get worse as you add more source files and more libraries like this. Not to mention adding more people to the project and explaining to them how they "need to remember to rebuild X library when they change a given set of header files or project-wide definitions".
In fact, we ran into the same dilemma for the code base I work on - it spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. However, because this configuration is done through headers, making a change in them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I were to give any general tips that help avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's usually tempting to shove all configuration in one place, create pretty comments and sections for each software module in this one file. It's easier to manage this way (until this file becomes too big), but because this file is so common, it means that any change made to it will cause a full rebuild. Split such files to separate headers corresponding to each module in your project.
Include header files only where you need them. I sometimes see an approach where header files are created that only "bundle" other header files, and that larger header is then included everywhere. In this case, making a change to any of those "smaller" headers forces a recompile of every source file including the larger one. If the bundling file didn't exist, only the sources explicitly including the one small header would have to be recompiled. Obviously there's a line to be drawn here - including headers that are too "low level" may not be the greatest idea either, e.g. they may not be meant to be included at all, being internal library files that can change at any time.
Prefer including headers in source files rather than in other header files. If you have a pair of your own *.c (*.cpp) and *.h files - say temp_logger.c/.h - and you need the ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header in your temp_logger.c file. Later on, files making use of the temp_logger functions won't have to be recompiled when the HAL is rebuilt.
My opinion is yes, build the HAL into a library. The benefit of faster build times outweighs the risk of the library getting out of date. After some point early in the project it's unusual for me to change something that would affect the HAL, but the faster build time pays off many times over.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.

How to compile VIs for different targets with compile flags in LabVIEW?

I have five RT targets that run almost identical code. I don't want to copy the VIs around to every target, mainly because I don't want to recopy everything when changes happen. My preferred way would be to write one VI with some conditional disable or case structures, where the decision whether a branch is enabled or not is made by a build file/script.
To achieve the case switching, I'd like to define string constants in a build script, and dead-code elimination should remove the unused cases after compilation.
What are the right tools to achieve that? And how would you combine that with CI?
There's no API today to do this from the build, but I would suggest that a conditional disable structure is what you want. There are some ideas on the LV idea exchange requesting this functionality.
Some options:
I believe you can set the condition value per-target, so you can have one target for each build and set a different value for each target. Or you could have multiple projects and have a different value for each project.
The CDS should have a target condition. I'm not sure how detailed you can make that condition, because I rarely work with targets.
While there's no proper API, you can call a pre-build VI and set the condition's value in the project/target programmatically using a tag. Haven't done this myself, but there are examples here and here.
I'm not sure how this would work with CI, as I don't do automated builds. I'm guessing once it's part of the build spec, it will simply be executed when you call the build spec.

Can GNU make create broken binaries when building in parallel?

I am working in a project where we have just added parallelism to our build system, using GNU Make.
We build both libraries and the programs in parallel.
First we build all the libs necessary for the binaries. After the libs are created we start building the binaries.
Now when running our programs we have found that one of the binaries doesn't run as expected. Is it possible that GNU Make could produce broken binaries when building in parallel, even though the link succeeds? If that is the case, what is the common cause and how can one avoid it?
Correct parallel builds depend on a correct makefile. If a build works serially but not in parallel, that means that your makefile has not declared all the prerequisites that it needs, so make doesn't realize it can't build target X until after target Y is complete.
However, it's extremely unlikely that these kinds of errors would allow the build to succeed: that is, the compiler or linker will almost always fail if things are building in the wrong order. It's hard for me to imagine how the build would succeed except by the purest chance, if at all (maybe if your tools overwrite an existing file instead of deleting it and writing it from scratch). Of course, you've given no information about exactly what "doesn't run as expected" means, so it's hard to say for sure.
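For a concrete (made-up) illustration of the kind of missing prerequisite involved:

all: libfoo.a prog          # serial builds work only because libfoo.a happens to be listed first

prog: main.o                # BUG: libfoo.a is not declared as a prerequisite of prog
	$(CC) -o $@ main.o -L. -lfoo    # with -j this can run before libfoo.a exists, or while it is half-written

libfoo.a: foo.o
	$(AR) rcs $@ foo.o

The fix is to declare the real dependency (prog: main.o libfoo.a) so make orders, and re-links, correctly at any -j level.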
To investigate you need to do some testing: does it fail the same way every time you do a parallel build? Does it fail even if you use different amounts of parallelism (different -j levels)? Does it continue to fail if you switch back to non-parallel builds? Does the build succeed with -j even if you start with a completely clean workspace (nothing built)?
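A quick way to run those checks (./prog stands in for whichever binary misbehaves; --output-sync needs GNU make 4.0 or newer):

make clean && make -j8 && ./prog               # parallel build from a clean state
make clean && make && ./prog                   # serial build from a clean state, for comparison
make clean && make -j8 --output-sync=target    # groups each target's log output, easier to read while investigating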
