GPRbuild: `runtime` attribute ignored in aggregated project

I am working on a few libraries for coding Arduinos in Ada. Each library is its own project, and I have an aggregate project that aggregates the libraries. I need to specify the runtime for each project since they are running on different chips. So for example I have something like this:
aggregate project Agg is
   for Project_Files use ("due/arduino_due.gpr",
                          "uno/arduino_uno.gpr",
                          "nano/arduino_nano.gpr");
   -- ...
end Agg;
library project Arduino_Due is
   -- Library_Dir, _Name, and _Kind attributes ...
   -- Target attribute ...
   for Runtime ("Ada") use "../runtimes/arduino_due_runtime";
   package Compiler is
      -- Driver and Switches attributes ...
   end Compiler;
end Arduino_Due;
And similar projects for the Uno and Nano. Building arduino_due.gpr directly works fine. It finds my runtime in the specified folder as it should. However, when I build agg.gpr, I get
fatal error, run-time library not installed correctly
cannot locate file system.ads
This occurs whether I use an absolute path or a relative path, and it also occurs when the relative path is concatenated with Project'Project_Dir. However, if, rather than using the Runtime attribute, I use the compiler switch --RTS=..., then it works, but only if I use a relative path prefixed with Project'Project_Dir. An absolute path or a plain relative path results in the error gprbuild: invalid runtime directory runtimes/arduino_due_runtime.
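Concretely, the variant that does work looks something like this (just a sketch; the other compiler switches are omitted and the path is the same relative one as above):
package Compiler is
   -- Other Driver and Switches attributes omitted; only the runtime switch is shown.
   for Switches ("Ada") use
     ("--RTS=" & Project'Project_Dir & "../runtimes/arduino_due_runtime");
end Compiler;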
So what's going on here? This behavior seems inconsistent and I couldn't find anything in the docs about it so I suspect a bug. But I thought I'd ask here first in case I'm doing something wrong. Maybe I should just be using child projects, or project extension?

This isn’t a bug, it’s a feature :-).
See this rejected issue.
There are two things:
Several options are only recognised in the main project, and when you use an aggregate project, the aggregate itself is the main project.
Package Builder is ignored in aggregated projects.
My conclusion: aggregate projects don’t suit your use case, or mine. As I said in the issue noted above, back to Makefiles (or scripts).
Part of the design intent is that aggregate projects should share code and compilations: as 2.8.4 of the manual says,
The loading of aggregate projects is optimized in GPRbuild, so that all files are searched for only once on the disk (thus reducing the number of system calls and yielding faster compilation times, especially on systems with sources on remote servers). As part of the loading, GPRbuild computes how and where a source file should be compiled, and even if it is located several times in the aggregated projects it will be compiled only once.
Since there is no ambiguity as to which switches should be used, files can be compiled in parallel (through the usual -j switch) and this can be done while maximizing the use of CPUs (compared to launching multiple GPRbuild commands in parallel).
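To make "back to Makefiles (or scripts)" concrete: a minimal script just runs gprbuild once per board project (project paths as in the question), so each project's Runtime attribute is honoured on its own:
#!/bin/sh
# Build each board project separately instead of going through the aggregate.
gprbuild -P due/arduino_due.gpr
gprbuild -P uno/arduino_uno.gpr
gprbuild -P nano/arduino_nano.gpr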

Related

hierarchical compile order with modelsim on command line

I'm trying to compile a VHDL design with ModelSim on the command line. Is there any way to get an automatic compile order according to the design hierarchy?
I didn't find an option in the documentation of vcom. The only link I found is this one, where the solution was to write a brute-force script. But that is 10 years old, so maybe there is something new by now. It should work like the -i option of ghdl.
I'm using Altera/Intel Modelsim 18.0 on Linux.
VUnit is an open source tool that will do that for you. I recommend the following reading:
Installation
Compilation (what you're looking for)
Disclaimer: I'm one of the authors
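For a rough idea of what that looks like, a minimal run.py follows the pattern below (the library name and the source glob are placeholders for your project):
# run.py -- minimal VUnit sketch
from vunit import VUnit

vu = VUnit.from_argv()                # parses the standard VUnit command-line options
lib = vu.add_library("lib")           # logical library to compile into
lib.add_source_files("src/*.vhd")     # VUnit scans these and derives the compile order
vu.main()                             # compiles in dependency order and runs the test benches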
It seems that in recent versions (tested in ModelSim SE-64 2020.4), vcom supports a new -autoorder parameter, which is described as follows:
Source files can be specified in any order. When all source files can be specified on a single command line, compilation proceeds in a scan phase, followed by a refresh phase. To perform the scan phase over multiple compilations, inhibit the refresh phase with -noautoorderrefresh. Then perform the refresh phase with -refresh_marked (and omit -autoorder).
Just by adding the -autoorder parameter, I was able to easily compile a large VHDL project with many dependencies that had previously failed due to a wrong compile order.
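As a minimal example (library and file names are placeholders), something like this is enough when all sources fit on one command line:
vlib work
vcom -work work -autoorder pkg_utils.vhd core.vhd top.vhd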

Fortran dependency management and dynamic config-/documentfile merges using "maven equivalent"

I have a large project written mostly in FORTRAN90, consisting of a core and numerous add-on modules also written in FORTRAN90. At any given time I'd like to be able to:
package the core module together with any number of the modules
create a new config-file merging the core-config and module-configs
merge the various latex-files from the core and modules
The code+configs+documentation lives in SVN...
Can/should Maven be used for this use case?
******* UPDATE *******
#haraldkl:
Ok, I'll try to elaborate as it definitely is in my interest to gather as much info as possible on this - I really appreciate the comments I get!
My project contains a core module which is mandatory. In order to add functionality you may select an arbitrary number of add-on modules. The core and each module reside in their own directories and are under SVN control. For a given delivery I would like to be able to select the "core" and an arbitrary number of modules and calculate the dependency chain, in order to build the modules in the correct order, as they sometimes, unfortunately, have cross-dependencies. Once the build order has been set I need to be able to merge the property-files from the selected modules with the property-file for the "core", so I end up with an assembled/aggregated property-file containing the aggregated properties from the "core" and all the selected modules. The same goes for the latex-files: I'd like to get an assembled document based on the "core" plus the selected modules' latex-files, thus ending up with one latex-file.
So, the bottom line: a solution something like:
tick selected modules to go with the delivery (core is mandatory so no need to tick)
click "Assemble" (code is gathered from SVN)
The solution calculates correct build order
The solution merges property-files -> "package.property"
The solution merges latex-files -> "document.latex"
Currently we use make under UNIX but I'm a little uncertain as to what extent make is able to handle 4 and 5 above.
DONE!
Here is my take on it:
I believe steps 1 to 3 are completely achievable with commonly used configuration tools. Steps 4 and 5 also present, as far as I can see, just another build task; there is no reason why Make should not be able to do that. I regularly generate files for LaTeX processing via Make and then assemble them, mostly with latexmk. The main point is how to select what to merge and how it has to be merged; you are a little unclear on the how, while the what should be handled by the configuration system. Your choice probably also depends on what should happen at the end of step 3: should the code actually be compiled, or do you need some written-out version of the dependencies?
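As an illustration of steps 4 and 5 as ordinary build tasks, Make rules of roughly this shape would do (the file and variable names are made up, a real merge may need more than plain concatenation, and recipe lines must start with a tab):
MODULES := module_a module_b   # the selected add-on modules

package.property: core/core.property $(addsuffix /module.property,$(MODULES))
	cat $^ > $@

document.latex: core/core.tex $(addsuffix /module.tex,$(MODULES))
	cat $^ > $@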
The traditional configure system on Unix is the autotools suite. However, as far as I know, it does not support the identification of Fortran dependencies out of the box, and you would need to extend it in that direction.
A popular replacement for the autotools is CMake, which does include Fortran dependency resolution. It might suit your needs best, as pointed out by casey, since it lets you use various generators; for example you could have it generate an appropriate Makefile for your selection of files.
Waf gives you a great deal of flexibility to handle steps 4 and 5 in your list; it is also capable of identifying Fortran dependencies, but I think it is not as straightforward to generate, for example, Makefiles from it as it is with CMake. The flexibility here comes from the fact that your Waf scripts are just ordinary Python scripts, so you can easily use any Python tools in your workflow and describe steps 4 and 5 in whatever complicated manner you desire.
Maven can compile Fortran code, though I do not have any experience with it; I would doubt that it gives you automatic Fortran dependency resolution. In my understanding, it is not as good a fit for your setup as the other tools.
The Fortranwiki lists some more tools; for example, you could come up with your own environment for building Makefiles and use makedepf90 to generate the dependencies.
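For that last route, a fragment like the following (directory layout and compiler are placeholders) lets Make pick up the Fortran compile order from makedepf90:
FC   ?= gfortran
SRCS := $(wildcard core/*.f90) $(wildcard module_a/*.f90)
OBJS := $(SRCS:.f90=.o)

# makedepf90 emits Make rules describing the module/use dependencies
deps.mk: $(SRCS)
	makedepf90 $(SRCS) > $@

%.o: %.f90
	$(FC) -c $< -o $@

include deps.mk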

What does BII_IMPLICIT_RULES_ENABLED do when switched on or off in CMakeLists.txt?

I was wondering about the BII_IMPLICIT_RULES_ENABLED flag which I had switched off in one of my CMakeLists.txt files, in order to get an OpenGL related block to compile on a Mac, following a suggestion from biicode. This setting is still there and everything works perfectly, but I would like to find out more about it. Could someone explain what it does exactly?
Thanks!
BII_IMPLICIT_RULES_ENABLED activates the addition of system libs to any target that has included certain headers. For example, if your code contains:
#include "math.h"
and you are on a *nix system, then the library "m" (libm) will be added to your target via TARGET_LINK_LIBRARIES.
You can see the headers that are processed in your cmake/biicode.cmake file, in the HANDLE_SYSTEM_DEPS macro.
My recommendation: set it to False whenever possible and handle the required system libs yourself, which is exactly what you have done. It is something that will be deprecated soon, or at least set to False by default for new projects. This option sometimes causes trouble if something fails or there is a bug in biicode.cmake; e.g., in the past it tried to add libm to targets on Windows as well. It will be gradually deprecated and probably replaced by some hosted CMake macros (as in http://www.biicode.com/biicode/cmake) that users can apply if they decide to, rather than automatically as is done now.
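For reference, the relevant part of a block's CMakeLists.txt would then look roughly like this (a sketch; the biicode target variable may be named differently in your generated files):
set(BII_IMPLICIT_RULES_ENABLED False)   # disable the automatic system-lib handling
add_biicode_targets()

# handle the required system libs yourself, e.g. libm on *nix for math.h users
if(UNIX)
  target_link_libraries(${BII_BLOCK_TARGET} INTERFACE m)
endif()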

Maven flex project using source directory from seperate module with new artifactId

Finding it difficult to express myself easily around this issue, so I thought it best to start with a context section:
Context:
I have a Flex based application (a rather complex system) that can be compiled using "conditional compilation" into various use cases, e.g.:
Compilation One = portalProjectUserOne
Compilation Two = portalProjectUserTwo
Whether using conditional compilation is a sound idea is a completely different argument, so let's assume one is forced down this road. I have therefore decided to create a project for each of my desired compilations:
portalProjectUserOne
    -branches
    -tags
    -trunk
        -src
        -pom
portalProjectUserTwo
    -branches
    -tags
    -trunk
        -src
        -{NEEDS TO USE PROJECT ONES SOURCE}
As I do not want to break the ever-rigid laws of programming by duplicating anything, I need a way of accessing the source of project ONE and using that source to do a CUSTOM compilation.
Things I have tried:
I tried using relative paths (../../portalProjectUserOne/trunk/src/etc...) with successful compilation, but when it came time to release a final product to the Nexus repo it had a few issues with reaching outside the project structure; that, and it felt a bit dirty really.
I attempted to use the "maven-dependency-plugin" to try and copy the sources from the first project; maybe this is a pure lack of understanding on my part, but I cannot get my head around how you generate your classes in one project and access them from another.
This is my first question on Stack Overflow, and if I have been far too broad please let me know and I shall update with more extensive examples if required.
Thanks for listening/reading/being a coder.

Multiple Boost.Thread Instances OK in a C++ application?

I have an application with a plug-in architecture that is using Boost.Threads as a DLL (specifically, a Mac OS X framework). I am trying to write a plug-in that uses Boost.Threads as well, and would like to link in the library statically. Everything builds fine but the application quickly crashes in my plug-in, deep within the Boost.Threads code. Linking to the DLL version of Boost.Threads seems to resolve the problem, but I'd like my plug-in to be self-contained.
Is it possible to have two instances of Boost.Threads with such a setup (one as a DLL, one statically linked in another DLL)? If so, what might I be missing to make the two instances get along?
My team once faced a similar problem. For reasons I will not go into at this time, we had to develop a system that used two different versions of Boost (threads, system, filesystem).
The idea we came up with and executed was to grab the source code of both versions of Boost we needed, and then tweak one of them, changing the symbols and function names to avoid name clashes.
In other words, we replaced all references to the name boost with bubbles (or some other name) inside the sources, and also changed the build so it would produce libbubbles instead of libboost.
This procedure gave us two sets of libraries, each having its own binaries and header files.
If you looked at the source code of our application you would see something like:
#include <bubbles/thread.hpp>
#include <boost/thread.hpp>
bubbles::thread* thread_1;
boost::thread* thread_2;
I imagine some of the guys here have already faced a similar situation. There are probably better alternatives to the one I suggested above.
