I have a Keil STM32 project for an STM32L0. I sometimes (more often than I'd like) have to change the include paths or global defines. This triggers a complete recompile of all code, because the toolchain has to check for behaviour changed by these settings. The problem is: I didn't necessarily change parameters relevant to the HAL, so (as far as I understand) there is no need for those files to be recompiled. This recompilation takes up quite a bit of time because I included all the HAL drivers for my STM32L0.
Would a good course of action be to create a separate project which compiles the HAL as a single library and include that in my main project? (This would of course be done for every microcontroller separately as they have different HALs).
ps. the question is not necessarily only useful for this specific example but the example gives some scope to the question.
pps. for people who aren't familiar with the STM32 HAL: it is the standardized interface through which a program talks to the underlying hardware. It is supplied as .c and .h files, rather than in precompiled form like the C standard library/STL.
update
Here is an example of the defines that need to be managed in my example project:
STM32L072xx, USE_B_BOARD, USE_HAL_DRIVER, REGION_EU868, DEBUG, TRACE
Only STM32L072xx and DEBUG are relevant to configuring the HAL library, so there shouldn't be a need for me to recompile the HAL when I change TRACE from defined to undefined. It therefore seems to me that the HAL could be managed separately.
edit
Seeing as a close vote has been cast: I've read the don't-ask section, and my question seeks to constructively add to the knowledge of building STM32 programs and to find a best practice for using the HAL libraries more effectively. I haven't found any questions on SO about building the HAL as a static library, so this question at least qualifies as unique. It is also meant to invite a rich answer that elaborates on the pros/cons of building the HAL as a separate static library.
The answer here is.. it depends. As already pointed out in the comments, it depends on how you're planning to manage your projects. To answer your question in an unbiased way:
Option #1 - having the HAL sources directly in your project means rebuilding the HAL every time anything in its (and underlying) headers changes, which you've already noticed. The downside is longer build times. The upside: you are sure that what you build is what you get.
Option #2 - having the HAL as a precompiled static library. Upside - shorter build times; downside - you can no longer be absolutely certain that the HAL library you link in actually works as you want it to. In particular, you'd need some way to make sure that all the #defines are exactly the same as when the library was built. This includes project-wide definitions (DEBUG, STM32L072xx etc.), as well as anything in the HAL config files (stm32l0xx_hal_conf.h).
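One way to mitigate the define-consistency risk of option #2 is a stamp file: record the defines the library was built with and compare them on every build. A minimal sketch in shell - the define list and file names below are illustrative placeholders, not facts about the asker's project:

```shell
# Record the defines the HAL library was built with; detect drift on later builds.
LIB_DEFS="-DSTM32L072xx -DUSE_HAL_DRIVER -DDEBUG"   # placeholder define set
mkdir -p build
printf '%s\n' "$LIB_DEFS" > build/current.flags
if ! cmp -s build/current.flags build/hal_lib.flags 2>/dev/null; then
    echo "HAL-relevant defines changed - rebuild libhal.a"
    # ... rebuild the static library here ...
    cp build/current.flags build/hal_lib.flags   # record what it was built with
fi
```

Run as a pre-build step, this turns a silent mismatch into an explicit rebuild (or at least a loud warning).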
Seeing how you're a Keil user - maybe it's just a matter of enabling multi-core build? See this link: http://www.keil.com/support/man/docs/armcc/armcc_chr1359124201769.htm. The HAL library isn't so large that rebuilding its source files should make build times a real concern.
If I were to express my opinion and experience: personally I wouldn't do it, as it may lead to lower reliability or side effects that are very hard to diagnose and will only get worse as you add more source files and more libraries like this. Not to mention adding more people to the project and explaining to them how they "need to remember to rebuild library X when they change a given set of header files or project-wide definitions".
In fact, we ran into the same dilemma with the code base I work on - it spans over 10k source and header files in total, some of which are configuration-specific and many of which are shared. It's highly modular, which allows us to quickly create something new (both hardware- and software-wise) just by configuring existing code, mainly through a set of header files. However, because this configuration is done through headers, making a change in them usually means rebuilding a large portion of the project. Even though build times get annoying sometimes, we opted against making static libraries for the reasons mentioned above. To me personally it's better to prioritize reliability, as in "I know what I build".
If I were to give any general tips that help to avoid rebuilds as your project gets large:
Avoid global headers holding all configuration. It's usually tempting to shove all configuration into one place, with pretty comments and sections for each software module in this one file. It's easier to manage this way (until the file becomes too big), but because the file is included so widely, any change made to it will cause a full rebuild. Split such files into separate headers corresponding to each module in your project.
Include header files only where you need them. I sometimes see an approach where header files are created that only "bundle" other header files, and the bundle is then included everywhere. In that case, a change to any of the "smaller" headers forces recompiling every source file that includes the larger one. If the bundle didn't exist, only the sources explicitly including the one small header would have to be recompiled. Obviously there's a line to be drawn here - including headers that are too "low level" may not be the greatest idea either; they may be internal library files that aren't meant to be included directly and may change at any time.
Prefer including headers in source files over header files. If you have a pair of your own *.c (*.cpp) and *.h files - say temp_logger.c/.h - and you need the ADC, then unless you really need some ADC definition in your header (which you likely won't), include the ADC header in your temp_logger.c file. That way, when the ADC header changes, only temp_logger.c has to be recompiled - not every file that uses the temp_logger functions.
My opinion is yes: build the HAL into a library. The benefit of faster build times outweighs the risk of the library getting out of date. Past some early point in the project, it's unusual for me to change something that would affect the HAL, while the faster build time pays off many times a day.
I create a multi-project workspace with one project for the HAL library, another project for the bootloader, and a third project for the application. When I'm developing, I only rebuild the application project. When I make a release build, I select Project->Batch Build and rebuild all three projects. This way the release builds always use all the latest code and build settings.
Also, on the Options for Target dialog, Output tab, unchecking Browse Information will greatly reduce the build time.
Related
I am following the book "Beginning STM32" by Warren Gay (excellent so far, btw) which goes over how to get started with the Blue Pill.
A part that has confused me: while putting our first program on the Blue Pill, the book advises force-rebuilding the program before flashing it to the device. I use:
make clobber
make
make flash
My question is: why is this necessary? Why not just flash the program, since it is already built? My guess is that it is just to learn how to build a program from scratch... but I also wonder whether rebuilding before flashing to the device is best practice. The book does not say why.
You'd have to ask the author, but I would suggest it is "just in case", nothing more - perhaps a lack of trust that the makefile specifies all possible dependencies. If the makefile were hand-built without automatically generated dependencies, that is entirely possible. It is also easier to simply advise a rebuild than to explain all the situations where it might or might not be necessary, which would not be exhaustive anyway.
From the author's point of view, it eliminates a number of possible build consistency errors that are beyond his control so it ensures you don't end up thinking the book is wrong, when it might be something you have done that the author has no control over.
Even with automatically generated dependencies, a project may have dependencies that the makefile or dependency generator does not catch, resource files used for code generation using custom tools for example.
For large projects developed over a long time, some seldom modified modules could well have been compiled with an older version of the tool chain, a clean build ensures everything is compiled and linked with the current tool.
make decides what to rebuild based on file timestamps; if you have build variants controlled by command-line macros, make cannot determine which objects depend on such a macro. So when building a different variant (switching from a "debug" to a "release" build, for example), it is a good idea to rebuild everything to ensure each module is consistent and compatible.
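This is easy to demonstrate: make looks only at timestamps, so a changed command-line macro is invisible to it. A toy sketch, assuming GNU make and a POSIX shell (the macro name is arbitrary):

```shell
# A one-rule makefile; note the recipe's leading tab written via printf's \t.
mkdir -p make_demo
printf 'hello\n' > make_demo/in.txt
printf 'out.txt: in.txt\n\tcp in.txt out.txt\n' > make_demo/Makefile
make -C make_demo out.txt                    # first run: out.txt is built
make -C make_demo out.txt CFLAGS=-DRELEASE   # second run: "up to date" -
                                             # the macro change goes unnoticed
```

Nothing in the timestamps moved, so make happily keeps objects built under the old macro set - hence the advice to rebuild all when switching variants.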
I would suggest that during a build/debug cycle you use incremental builds for development speed as intended, and perform a full rebuild for a release or when changing some build configuration (such as enabling/disabling floating-point hardware, or switching to a different processor variant).
If during debug you get results that seem to make no sense, such as breakpoints and code stepping not aligning with the source code, or a crash or behaviour that seems unrelated to some small change you may have made (perhaps that code has not even been executed), sometimes it may be down to a build inconsistency (for a variety of reasons usually with complex builds) and in such cases it is wise to at least eliminate build consistency as a cause by performing a rebuild all.
Certainly if you are releasing code to a third party, such as a customer or for production of some product, you would probably want to perform a clean build just to ensure build consistency. You don't want users reporting bugs you cannot reproduce because the build they have been given is not reproducible.
Rebuilding the complete software is good practice because it regenerates all the dependencies and symbol files using the paths on your local machine.
If you need to debug the application with a debugger, you will need the symbol file and the paths to where your source code lives. If you flash the application without rebuilding, you may not be able to debug certain parts, because you don't know which paths the application was compiled with and you might be missing the related symbol information.
I am developing an OS for embedded devices that runs bytecode. Basically, a micro JVM.
In the process of doing so, I am able to compile Java applications to bytecode(ish) and flash that onto, for instance, an ATmega1284P.
Now I've added support for C applications: I compile and process it using several tools and with some manual editing I eventually get bytecode that runs on my OS.
The process is very cumbersome and heavy and I would like to automate it.
Currently, I am using makefiles for automatic compilation and flashing of the Java applications & OS to devices.
Roughly, the steps for a C application are as follows, each performed manually:
(1) Use Docker to run a Linux container with lljvm that compiles a .c file to a .class file (see also https://github.com/davidar/lljvm/tree/master)
(2) convert the resulting .class file to a Jasmin file (https://github.com/davidar/jasmin) using the ClassFileAnalyzer tool (http://classfileanalyzer.javaseiten.de/)
(3) manually edit this Jasmin file in a text editor, replacing/adjusting some strings
(4) convert the modified Jasmin file back to a .class file using Jasmin
(5) put this .class file in a folder where the rest of my makefiles (the ones that already make and deploy the OS and class files from Java apps) can take over.
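The five steps above can be chained into one script, so a single command replaces the manual sequence. This is a sketch only - the Docker image name, jar paths, file names, and sed edits below are placeholders for whatever your actual tools expect:

```shell
cat > build_capp.sh <<'SCRIPT'
#!/bin/sh
set -e                                  # abort at the first failing step
src=$1                                  # e.g. blink.c
docker run --rm -v "$PWD:/work" lljvm-img lljvm-cc "/work/$src"  # (1) .c -> .class
java -jar ClassFileAnalyzer.jar app.class > app.j                # (2) .class -> Jasmin
sed -i 's/OLD_STRING/NEW_STRING/g' app.j                         # (3) scripted edits
java -jar jasmin.jar app.j                                       # (4) Jasmin -> .class
cp app.class ../os/apps/                                         # (5) hand off to make
SCRIPT
sh -n build_capp.sh && echo "script parses"
```

If the step-(3) edits are mechanical enough to express as sed/awk rules, the whole pipeline becomes one make recipe and slots into your existing makefiles.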
My current options seem to be to keep using makefiles, but this is a bit unwieldy (I already have 5 different makefiles, and this would extend that chain further). I've also read a bit about SCons. In essence, I'm wondering which tools, or what kind of approach, would be recommended for complicated builds like this.
Hopefully this helps a bit, but the question as such could probably be a subject for a heated discussion without many helpful results.
As pointed out in the comments by others, you really need to automate the steps, starting with your .c file up to the point where you can integrate it with the rest of your system.
There is generally nothing wrong with make, and you would not win too much by switching to SCons. You'd get more ways to express what you want to do - among other things, if you wanted to write that automation directly inside the build system and its rules, you could use Python and not only shell (though if that were the concern, you could just as well call that Python code from make). But the essence of target, prerequisite, recipe is still there - along with the need to write the necessary automation for those .c-to-integration steps.
If you really wanted to look into alternative options, Bazel might be of interest to you. The downside is that the initial effort to write the rules to fit your needs could be costly and, depending on the size of your project, might just be too much. On the other hand, once that is done, it would be very easy to use (applying those rules to a growing code base), and you could also ditch the container and rely on Bazel's more lightweight sandboxing and external rules to fetch the tools and bits you need for your build - all with a single system for describing the build.
I have an iPad app that can come in several different releasable flavors, and I'm trying to decide how to best make these alternate releases. They have only a few differences source-code wise, and primarily differ in resource data files (xml and a very large amount of binary files).
1) Should I duplicate the project and branch the handful of source files and include the appropriate resources separately for each? This seems more of a maintenance hassle as I add files to the project and do other basic things other than edit shared files.
2) Or should I use #defines to build the appropriate flavor I want at any time, then ifdef out entire files accordingly? This seems simpler but my suspicion is that I won't be able to find an easy way to exclude/include resource files, and that would be a deal breaker.
Any suggestions on how to deal with the resource issue in option 2, or if there is an alternate approach altogether that is better?
What about creating separate targets within a single Xcode project?
Make each target include the files that are appropriate for that app; no need for ifdefs that way.
As I create more applications, my /code/shared/* increases.
This creates a problem: zipping and sending a project is no longer trivial. It looks like my options are:
in Xcode, set shared files to use an absolute path. Then every time I zip and send, I must also zip and send /code/shared/* and give instructions - and hope the recipient doesn't already have anything at that location.
this is really not practical; it makes the zip file too big
maintain a separate copy of my library files for each project
this is not really acceptable, as a modification/improvement would have to be implemented separately everywhere, which makes maintenance unreasonably cumbersome.
some utility to go through every file in the Xcode project, figure out the lowest common folder, and create a zipped file structure that only contains the necessary files, but in their correct relative folder locations, so that the code will still build
(3) is what I'm looking for, but I have a feeling it doesn't as yet exist.
Anyone?
You should rethink your current process. The workflow you're describing in (3) is not normal; this all sounds very complicated, and it is all basically handled with relative ease if you use source control. (3) just doesn't exist and likely never will.
A properly configured SCM will allow you to manage multiple versions of multiple libraries (packages) and allow you to share projects (in branches) without ever requiring zipping up anything.
Whenever we recompile an exe or a DLL, its binary image is different even if the source code is the same, due to various timestamps and checksums in the image.
But, our quality system implies that each time a new DLL is published, related validation tests must be performed again (often manually, and it takes a significant amount of time.)
So, our goal is to avoid releasing DLLs that have not actually changed, i.e. to have an automatic procedure (script, tool, whatever...) that detects differing DLLs based only on the meaningful information they contain (code and data), ignoring timestamps and checksums.
Is there a good way to achieve this?
Base it off the version information, and only update the version information when you actually make changes.
Have your build tool build the DLL twice. Whatever differences exist between the two are guaranteed to be the result of timestamps or checksums. Now you can use that information to compare to your next build.
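A sketch of that idea with toy files (the real inputs would be the two back-to-back DLL builds): diff the two builds to learn which byte offsets are volatile, then ignore exactly those offsets in future comparisons.

```shell
# Two "builds" that differ only in a simulated embedded timestamp field.
printf 'payload-11111111-end' > build1.bin
printf 'payload-22222222-end' > build2.bin
# cmp -l lists each differing byte as "offset old new"; keep the offsets.
cmp -l build1.bin build2.bin | awk '{print $1}' > volatile_offsets.txt
cat volatile_offsets.txt   # bytes 9..16: the timestamp field to mask out
```

Comparing a future build then means checking that the only differing offsets are ones already listed in volatile_offsets.txt.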
If you have an automated build system that syncs source before starting a build, only proceed with building and publishing if there any actual changes in source control. You should be able to detect this easily from the output of your source control client.
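A minimal sketch of that skip logic, using a content hash of the sources as a stand-in for the source-control client's "anything changed?" answer - the paths and file names here are invented for illustration:

```shell
# Hash the source tree; build and publish only when the hash moves.
mkdir -p src && printf 'int main(void){return 0;}\n' > src/main.c
NEW=$(cat src/*.c | sha256sum | cut -d' ' -f1)
OLD=$(cat .src_hash 2>/dev/null || echo none)
if [ "$NEW" != "$OLD" ]; then
    echo "sources changed - build and publish"
    # ... build and publish steps go here ...
    echo "$NEW" > .src_hash
else
    echo "no change - skip publish"
fi
```

With a real SCM the condition would instead come from the sync output (e.g. whether the update brought down any files).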
We have the same problem with our build system. Unfortunately it is not trivial to detect whether there are any material code changes, since we have numerous static libraries, so a change to one may result in a dll/exe changing. And a change to a file directly used by the dll/exe may just fix a bad comment, without changing the resulting object code.
I've looked previously for a tool to do what you want and did not see one. My thought was to compare the two files manually and skip the non-meaningful differences between the two versions. The Portable Executable format is well documented, so I don't expect this to be terribly difficult. Our requirements additionally demand that we ignore the version stamped into the dll/exe, since we uniquely stamp all our files, and also the signature, as we sign all our executables.
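A generic sketch of that comparison: zero out the byte ranges you've identified as volatile (in a real PE that would be the TimeDateStamp, CheckSum, version, and signature regions - the offsets below are toy placeholders, not real PE header offsets), then hash what remains.

```shell
# Zero the given OFFSET/LENGTH ranges in a copy of FILE, then hash the rest.
mask_and_hash() {   # usage: mask_and_hash FILE OFFSET LENGTH [OFFSET LENGTH...]
    f=$1; shift
    cp "$f" "$f.masked"
    while [ "$#" -ge 2 ]; do
        dd if=/dev/zero of="$f.masked" bs=1 seek="$1" count="$2" \
           conv=notrunc 2>/dev/null
        shift 2
    done
    sha256sum "$f.masked" | cut -d' ' -f1
}
# Demo: two toy files differing only in a "timestamp" at offset 4, length 4.
printf 'ABCDWXYZrest' > a.bin
printf 'ABCD1234rest' > b.bin
[ "$(mask_and_hash a.bin 4 4)" = "$(mask_and_hash b.bin 4 4)" ] \
    && echo "identical ignoring masked bytes"
```

For real DLLs the offsets would be computed from the PE headers (e_lfanew and the optional-header layout) rather than hard-coded; the masking-then-hashing idea stays the same.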
I've yet to find time to do any of this, but I'd be interested in collaborating with you if you proceed with implementing a solution. If you do find a tool that does this, please do let us know.