How to compile VIs for different targets with compile flags in LabVIEW? - continuous-integration

I have five RT targets that run almost identical code. I don't want to copy the VIs around to every target, mainly because I don't want to recopy everything whenever something changes. My preferred way would be to write one VI with some conditional disable or case structures, where the decision whether a case is enabled should be made by a build file/script.
To achieve the case switching I'd like to define string constants in a build script, and dead code elimination should remove the unused cases after compilation.
What are the right tools to achieve that? And how would you combine that with CI?

There's no API today to do this from the build, but I would suggest that a conditional disable structure is what you want. There are some ideas on the LabVIEW Idea Exchange requesting this functionality.
Some options:
I believe you can set the condition value per-target, so you can have one target for each build and set a different value for each target. Or you could have multiple projects and have a different value for each project.
The CDS should have a target condition. I'm not sure how detailed you can make that condition, because I rarely work with targets.
While there's no proper API, you can call a pre-build VI and set the condition's value in the project/target programmatically using a tag. Haven't done this myself, but there are examples here and here.
I'm not sure how this would work with CI, as I don't do automated builds. I'm guessing once it's part of the build spec, it will simply be executed when you call the build spec.

Related

Why might it be necessary to force rebuild a program?

I am following the book "Beginning STM32" by Warren Gay (excellent so far, btw) which goes over how to get started with the Blue Pill.
A part that has confused me is, while we are putting our first program on our Blue Pill, the book advises to force rebuild the program before flashing it to the device. I use:
make clobber
make
make flash
My question is: why is this necessary? Why not just flash the program, since it is already made? My guess is that it is just to learn how to build a program from a clean state... but I also wonder if rebuilding before flashing to the device is best practice? The book does not say why.
You'd have to ask the author, but I would suggest it is "just in case", nothing more. Perhaps a lack of trust that the makefile specifies all possible dependencies. If the makefile were hand-built without automatically generated dependencies, that is entirely possible. Also, it is easier to simply advise rebuilding than to explain all the situations where it might or might not be necessary, and such a list would not be exhaustive anyway.
From the author's point of view, it eliminates a number of possible build consistency errors that are beyond his control so it ensures you don't end up thinking the book is wrong, when it might be something you have done that the author has no control over.
Even with automatically generated dependencies, a project may have dependencies that the makefile or dependency generator does not catch, resource files used for code generation using custom tools for example.
For large projects developed over a long time, some seldom modified modules could well have been compiled with an older version of the tool chain, a clean build ensures everything is compiled and linked with the current tool.
make decides what to rebuild based on file timestamps; if you have build variants controlled by command-line macros, make cannot determine which objects depend on such a macro. So when building a different variant (switching from a "debug" to a "release" build, for example), it is a good idea to rebuild everything to ensure each module is consistent and compatible.
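The timestamp problem above is easy to demonstrate with a toy makefile (everything here is invented for illustration; the "compiler" is just echo, and -B is GNU make's force-rebuild flag):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"

# A one-rule Makefile whose output records which macros it was "compiled" with.
printf 'app: main.c\n\techo "compiled with $(CPPFLAGS)" > app\n' > Makefile
touch main.c

make CPPFLAGS=-DDEBUG        # builds app: "compiled with -DDEBUG"
make CPPFLAGS=-DNDEBUG       # app is newer than main.c, so make says "up to date"
cat app                      # still says "compiled with -DDEBUG": a stale variant
make -B CPPFLAGS=-DNDEBUG    # force a full rebuild, like "make clobber && make"
cat app                      # now "compiled with -DNDEBUG"
```

The second invocation rebuilds nothing because make only compares timestamps; it has no idea the macro changed, which is exactly why switching variants calls for a clean build.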
I would suggest that during a build/debug cycle you use incremental builds for development speed, as intended, and perform a full rebuild for a release or when changing some build configuration (such as enabling/disabling floating-point hardware, or switching to a different processor variant).
If during debugging you get results that make no sense, such as breakpoints and code stepping not aligning with the source code, or a crash or behaviour that seems unrelated to some small change you made (perhaps that code has not even been executed), the cause is sometimes a build inconsistency, which can arise for a variety of reasons in complex builds. In such cases it is wise to at least eliminate build inconsistency as a cause by performing a full rebuild.
Certainly if you are releasing code to a third party, such as a customer or for production of some product, you would probably want to perform a clean build just to ensure build consistency. You don't want users reporting bugs you cannot reproduce because the build they were given is not reproducible.
Rebuilding the complete software is good practice because it regenerates all dependencies and symbol files with the paths on your local machine.
If you need to debug the application with a debugger, you will need the symbol files and the paths to your source code. If you flash the application without rebuilding, you may not be able to debug certain parts, because the debugger does not know where the application was compiled and the symbol information may be missing or stale.

How to set build tags in source file

New to Go. I'm familiar with the ability to consume build tags in source files like so:
// +build linux arm,!linux
but is there any way to create/export build tags in a source file? Something like:
// +build +custom_tag_name
I'm trying to do what the -tags argument does inside of a source file instead of adding it to a makefile so that when a library is added to a project, it will "set" certain tags that can be used in other files.
You can't do that. Source files can only set build constraints on themselves; they can't satisfy constraints. Constraints can only be satisfied as noted: implicitly by the environment, or explicitly via the -tags flag. Build constraints are a way to achieve environment-sensitive conditional compilation. Using one source file to control the build of another doesn't really make sense; you know at build time whether file A is in the build, so you also know whether file B should be in the build. This seems like an XY problem, possibly better solved by a mechanism similar to that of the SQL drivers: registering a handler in an init function.
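A minimal sketch of that database/sql-style registration pattern, compressed into one file for illustration (all names are invented): importing a package triggers its init function, which registers behaviour at run time, so no build tags are needed.

```go
package main

import "fmt"

// registry maps handler names to implementations, analogous to what
// sql.Register does for database drivers.
var registry = map[string]func() string{}

// Register is called by each "plugin" package from its init function.
func Register(name string, h func() string) {
	registry[name] = h
}

// In a real project this init would live in the library package, and a blank
// import (_ "example.com/customlib") in the consuming project would be enough
// to activate it.
func init() {
	Register("custom", func() string { return "custom behaviour enabled" })
}

func main() {
	if h, ok := registry["custom"]; ok {
		fmt.Println(h()) // prints "custom behaviour enabled"
	} else {
		fmt.Println("default behaviour")
	}
}
```

The consuming code then checks the registry at run time instead of relying on conditional compilation, which is how adding a library to a project can "switch on" behaviour elsewhere.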

When should I check "Parallelize Build" for an Xcode scheme?

I see that this option is unchecked in my current scheme and that a few places around the web recommend against it in certain cases. Can someone give a more thorough method of determining when this option can be checked for a scheme?
We don't know the internals of the "Parallelize Build" setting, but we can deduce why it might not always be beneficial.
First it's good to understand what "Parallelize Build" does. Source:
"This option allows Xcode to speed up total build time by building targets that do not depend on each other at the same time. This is a time-saver on projects with many smaller dependencies that can easily be run in parallel."
This option can cause problems when you have many targets that depend on each other.
For example, imagine that one target is a framework that your application target depends on. If you modify the framework target, then there are cases where you MUST build the framework target BEFORE the application target. Parallelizing them won't work, because for the application target and the framework target to work nicely together they must be "in sync": we can't build the application target without first compiling the changes in the framework target.
The above is a simple example, one that Xcode might handle nicely already, but some projects get very complex and without feeding proper information of your target-dependencies to Xcode, it might not be able to correctly parallelize your targets.
In summary, the setting is likely beneficial and can reduce build times. If you enable it and don't see any problems with code being out-of-sync across targets, keep it on; otherwise, turn it off. As with all performance settings, make sure to test and measure whether you actually see build-time improvements.

CMake - Build custom build paths for different configurations

I have a project whose make process generates different build artifacts for each configuration, e.g. if initiated with make a=a0 b=b0 it would build object files into builds/a0.b0, generate a binary myproject.a0.b0, and finally update a generic executable symlink to point to the most recently built binary: ln -s myproject.a0.b0 myproject. For this project, this is a useful feature mainly because:
It separates the object files for different configurations (so when I rebuild in another configuration I don't have to recompile every single source with new defines and configurations set, (unfortunately) a very common procedure).
It retains the binaries for each configuration (so it's not required to rebuild to use a different configuration if it has already been built).
It makes a copy (or link) to the last built binary which is useful in testing.
Right now this is implemented in an ugly decades-old non-portable Makefile. I'd like to reproduce the same behavior in CMake to allow easier building on other platforms, but I have not been able to reproduce it in any reasonable manner. It seems like the answer involves adding targets with add_library/add_executable, but doing this for each possible permutation of input parameters seems wrong. And I'm not sure I could get the usage right, allowing make, make a=a0, and make b=b0 a=a0, as opposed to what specifying a CMake target would require: make myproject-a0.b0.
Is this possible to do in cmake? Namely:
Specify the build directory based on input parameters.
Accept the parameters as make arguments which can be left out (defaulted) to select the appropriate target at the level of the makefile (so it's not required to rerun cmake for a new configuration).
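The per-configuration layout described above maps fairly naturally onto CMake's out-of-source builds: each configuration gets its own binary directory, so object files never collide, and the convenience link can be recreated in a post-build step. A rough, untested sketch (variable and file names are invented):

```cmake
# Configure each variant into its own build directory, e.g.:
#   cmake -DVARIANT_A=a0 -DVARIANT_B=b0 -B builds/a0.b0 -S .
#   cmake --build builds/a0.b0
set(VARIANT_A "a0" CACHE STRING "first configuration axis")
set(VARIANT_B "b0" CACHE STRING "second configuration axis")

add_executable(myproject main.c)
set_target_properties(myproject PROPERTIES
  OUTPUT_NAME "myproject.${VARIANT_A}.${VARIANT_B}")

# Recreate the 'latest build' convenience link after each build.
add_custom_command(TARGET myproject POST_BUILD
  COMMAND ${CMAKE_COMMAND} -E create_symlink
          $<TARGET_FILE:myproject>
          ${CMAKE_SOURCE_DIR}/myproject)
```

This does not give you the make a=a0 b=b0 interface directly; with CMake the variant is chosen at configure time rather than build time, which is exactly the second requirement the question asks about.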

Method to make IncludeDirs available to external tool

I'm currently trying to make splint available as an external tool in Visual Studio 2010.
It has problems finding all the includes for the file, since the INCLUDE variable seems to be set only at build time, and I haven't found any other way to extract the include paths.
My question: Would there be any way to extract the IncludeDir field from the current file's project's Properties page, ideally with the VC++'s AdditionalIncludeDirectories?
Note also that AdditionalIncludeDirectories is per file, as it can be changed for individual source files as well as on the project level, and if it contains macros it can evaluate differently for each source file too!
I'm not familiar with driving the MSBuild objects via the API, but that's what the IDE uses. Whether that way or by simply running MSBuild.exe, you need to get it to figure out all the properties, conditions, etc. and then tell you the result. If everything is well behaved, you could create a target that also uses the ClCompile item array and emits the %(AdditionalIncludeDirectories) metadata, for example by writing it to a file or passing it to your other tool. That's what's used to generate the /I parameters to CL, so you can get the same values.
If things are not well behaved, in that necessary values are changed during the detailed build process, you would need to perform the same preliminaries as the ClCompile target normally does. Or just override ClCompile with your own target (the last definition of a target is the one used) so it certainly runs in the same context.
Either way, there are places where build script files can be automatically included into all projects, so you can add your stuff there or use a command argument (I think) to MSBuild to add another Include.
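As a sketch of the "custom target that emits the metadata" idea (untested; the target name and output file are invented), a .targets file imported into the project could iterate the ClCompile items and write each file's evaluated include directories out for splint to consume:

```xml
<!-- DumpIncludes.targets: import this from the .vcxproj (or one of the
     auto-import locations), then run:
       msbuild MyProject.vcxproj /t:DumpIncludeDirs -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="DumpIncludeDirs">
    <ItemGroup>
      <_IncludeLines Include="@(ClCompile)">
        <Line>%(ClCompile.Identity);%(ClCompile.AdditionalIncludeDirectories)</Line>
      </_IncludeLines>
    </ItemGroup>
    <!-- One line per source file: path, then its evaluated include dirs. -->
    <WriteLinesToFile File="$(IntDir)include-dirs.txt"
                      Lines="@(_IncludeLines->'%(Line)')"
                      Overwrite="true" />
  </Target>
</Project>
```

Because the metadata is evaluated per item, this also captures the per-file overrides mentioned above, which a single project-level value would miss.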
—John
