When should #include be used over #include-once?

According to AutoIt's online reference:
It is quite common to have the same "#include" line in several of the files included in a script. If the same file were to be included several times, it is quite likely that this would generate a "Duplicate function" or "Cannot redeclare a Const" error. So when writing a script intended for use as an include file, add #include-once to prevent that file from being included more than once. Note that the #include-once line must be placed at the top of the script and before any other #include lines.
It is not recommended to add a #include-once line to scripts which are not intended to be used as include files within other scripts.
So #include-once should not be used in scripts that are not intended to be included in other scripts. Why?
If the benefit of using #include-once is to prevent errors triggered by duplicate #include, then what is the benefit(s) of using #include over #include-once?

"… what is the benefit(s) of using #include over #include-once?"
None; they serve different purposes.
#include <filename.au3> includes filename.au3. #include-once (no parameters) prevents a file containing that directive from being included more than once.
If two different include files both contain #include <FileConstants.au3> and FileConstants.au3 contains #include-once, then FileConstants.au3 does not get added again when the second file is included (preventing constant and function redeclaration errors). Usually, every file that is meant to be included separately starts with #include-once.
"So #include-once should not be used in scripts that are not intended to be included in other scripts. Why?"
It serves no purpose. Any effects it does have count as undocumented behavior (unintended by the developers and subject to unannounced change).
"When should #include be used over #include-once?"
You could use #include to execute code from another file at one specific location (or at several locations) within a file.

Related

ESP32 compiler giving "multiple definition of" errors

I've hit a new issue I've not come across before when using the standard Espressif ESP32 ESP-IDF setup under VSCode. It uses the GNU compiler.
I'm getting "multiple definition of" errors on variables that share the same name, but which should be local.
So I use a paired .c and .h file approach.
In my .c files I do this at the top
#define IO_EXPANDER_C //<<<This is a unique define for this file pair
#include "io-pca9539.h"
In my .h files I do this:
#ifdef IO_EXPANDER_C
//----- INTERNAL ONLY MEMORY DEFINITIONS -----
uint8_t *NextReadDataPointer;
//----- INTERNAL & EXTERNAL MEMORY DEFINITIONS -----
//(Also defined below as extern)
int SomeVariableIWantAvailableGlobally;
#else
//----- EXTERNAL MEMORY DEFINITIONS -----
extern int SomeVariableIWantAvailableGlobally;
#endif
It's a great, simple system: any other .c file that includes the .h file (without the #define above its include statement) gets all of the extern variables and none of the local ones.
But, compiling in VSCode with my ESP-IDF based project, I'm getting "multiple definition of" errors relating to "NextReadDataPointer"
I use the same variable name NextReadDataPointer in another file pair in just the same way, but it's never declared anywhere as extern and each file pair uses a separate #define (IO_EXPANDER_C and LED_C). I do this all the time normally and I can't see any obvious mistakes.
I've never seen a C compiler do this before; it's as if it's mixing up the local definitions somehow. A #define should only have scope in the file it is declared in and in any includes within that file.
Even odder, the error is not generated if the project is built and a function is called from just one of the file pairs that share the same local variable name. It is only generated when functions are called from both file pairs from my main application.
Can anyone shed light on whether the GNU C compiler does something funky for a standard ESP-IDF project as it's got me baffled?
uint8_t *NextReadDataPointer; creates a variable which is visible across all translation units, i.e. it's the opposite of "private". If you include this header in multiple .c files and the linker tries to link those together, it'll see a conflict. The keyword you're looking for is static: for example, static uint8_t *NextReadDataPointer; creates a variable that is not visible across translation units. The reason you don't see the problem when calling a function from only one of those two files is that in this case the linker doesn't bother looking into the other one.
Personally I'd avoid such clever preprocessor hacks because it's quite difficult to see how files include one another and debug the resulting problems. I'd suggest sticking to the standard way of declaring shared things in header files and keeping the private stuff inside the c file (prepended by static).
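A minimal sketch of the layout suggested above, reusing the file and variable names from the question (everything else is illustrative, not the asker's actual project):
// io-pca9539.h -- public interface only, safe to include from any .c file
#ifndef IO_PCA9539_H
#define IO_PCA9539_H
#include <stdint.h>
// Shared variable: declared here, defined exactly once in io-pca9539.c
extern int SomeVariableIWantAvailableGlobally;
#endif // IO_PCA9539_H

// io-pca9539.c -- private state stays inside this translation unit
#include "io-pca9539.h"
int SomeVariableIWantAvailableGlobally;   // the single external definition
static uint8_t *NextReadDataPointer;      // not visible to other .c files, so the name can be reused elsewhere
With this split there is no need for the IO_EXPANDER_C trick at all: each .c file owns its private variables, and the header only ever declares the shared ones.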

Freemarker: difference between include and import?

I am trying to create two templates and use the variables of one .ftl (freemarker) file in another.
I don't really understand why I should use include vs import.
#include is very much like copy-pasting the content of the included file into the place of the #include tag. #import also processes the target file, but doesn't output anything. Instead, it assigns the set of variables (the namespace) created by the imported template to the variable after the as keyword. As #macro-s and #function-s just create variables, #import is practical for pulling in a collection of utility macros and functions. Also note that #import-ing the same file a second time does nothing (as the namespace is only populated once), while calling #include twice will process the target file twice.
As for JavaScript, FreeMarker runs on the server side, and the JavaScript runs in the browser. So the browser only ever sees the final output from FreeMarker.

F# interactive - how to use precompiler directives when multiple files reference the same assembly?

So, in Project AB I have FileA.fs and FileB.fs. FileB uses definitions from FileA, and both FileA and FileB use definitions from Project C (written in C#).
In FileA.FS, I have:
#if COMPILED
namespace ProjectAB
#else
#I "bin\debug"
#r "ProjectZ.dll"
#endif
...which works how it's supposed to -- I can run the whole file in F#-Interactive and it's great.
In FileB.fs, my header is:
#if COMPILED
module ProjectAB.ModuleB
#else
#load "FileA.fs"
#I "bin\debug"
#r "ProjectZ.dll"
#endif
But when I run this (from FileB), I get the error:
FileA.fs(6,1): error FS0222: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'. Only the last source file of an application may omit such a declaration.
According to the fsi.exe reference, the #load directive "Reads a source file, compiles it, and runs it". But it seems like it must be doing so without the COMPILED directive defined, because it doesn't see the "namespace ProjectAB" declaration.
How can I set up my headers so that I can run either file in F#-interactive?
Edit: Per latkin's answer below, I created a script as the last file in the project, _TestScript.fsx. I removed all the precompiler stuff from the other files and set this as the header of the .fsx file:
#if INTERACTIVE
#I "bin\debug"
#r "ProjectZ.dll"
#load "FileA.fs"
#load "FileB.fs"
#endif
When I run this in the interactive, it correctly loads ProjectZ, FileA, and FileB for me to access in the interactive window.
However, in _TestScript.fsx, I get squiggly red lines and no intellisense on any of the functions/types from the referenced files (including the "open" statements).
Is there something else I need to set up in the script file to make the intellisense work? (The answer might be pretty basic since I have not used .fsx files before.)
I don't think there is a way to do this smoothly. A few things to consider:
INTERACTIVE is always defined when you are being processed by fsi.exe, whether you are a .fsx, .fs, #load'ed, whatever. COMPILED is similarly always defined when you are being processed by fsc.exe. I can see how the quoted phrase from the docs maybe doesn't make this totally crystal clear.
You can only declare namespaces in fsi from a #load'ed file.
So if you want your file to declare a namespace, and to work as the single file in interactive, then the namespace has to be #ifdef'ed out. But that also means the namespace will be #ifdef'ed out when the file is #load'ed...
You might be able to work around this by conditionally declaring it as a module, not a namespace. Or perhaps creating additional, more granular defines. It will be tricky.
Trying to get source files to work properly as part of a compiled library and simultaneously as single-file scripts is not easy, and I don't think the tooling was designed with this scenario in mind. More common is to have all of your library files behave purely as library files, then use dedicated standalone scripts which #load the .fs files they need. This keeps the driving code and the library code separate, and things fit together more cleanly.

Is there any downside for calling MATLAB's addpath many times?

My question is whether addpath is similar to #include in C. In C, if you don't add an #include guard (#ifndef ...), there will be multiple definitions of a function. But it seems that MATLAB handles this issue.
I was using this scheme to avoid calling addpath many times:
try
    f(sample_args);    % call a function that lives in lib/
catch err
    addpath('lib');    % only add lib/ to the path if the call failed
end
But now I think it's not necessary.
#include adds a specific header file. addpath merely adds a folder to the search path and does not add any code to your program. Think of it as adding directories to search for header files in C++ (e.g. in Visual Studio it's "Additional Include Directories", and with g++ it's the -I option).
Also, I think addpath checks if the folder has already been added, so you're really not doing anything with the repeated calls to addpath('lib').
Multiple calls to addpath do not create multiple functions, so from a correctness point of view there is no problem with using addpath multiple times.
However, addpath is a relatively slow operation. You shouldn't call it within a function that may be called many times during normal operation.
Edit:
Also, rather than relying on try/catch to check the current state of your path, you can check the path directly. See examples here: https://stackoverflow.com/a/8238096/931379.

Overriding macros using compiler options

I need to override certain macro definitions with my own header file, and I am not allowed to change the source code. I have to use gcc, but if anyone is aware of something similar in any other compiler, that would help as well.
Here is what I exactly need:
Let's say I have a code base with a lot of .c files. These .c files include .h files. After all the .h files have been included for each .c file, I want the compiler to behave as if there were another header, extra.h, which I specify when invoking the compiler. What I do in that .h file is #undef some macros and redefine them the way I want.
Note: I am aware of the --preinclude option in gcc, but using --preinclude means my extra.h gets overridden by the .h files of the original source code. What I need is some kind of post-include option.
Unless you uniformly have one specific header that is always included last in the source files, this is going to be tricky.
I think the way I'd approach it, if I had to, would be:
Create a new directory, call it headers.
Put in there suitable dummy headers with the same names as the regular headers; these would contain #include "extra.h" at the end (or possibly #include <extra.h>, but I would try to avoid that).
The dummy headers would also include the original files by some mechanism, possibly even using #include "/usr/include/header.h" but preferably some other technique, such as #include "include/header.h".
The extra.h header would always redefine all its macros - it would not have the normal #ifndef EXTRA_H_INCLUDED / #define EXTRA_H_INCLUDED / #endif multiple inclusion guards, so that each time it is included, it would redefine the relevant macros.
Consequently, extra.h cannot define any types. (Or, more precisely, if it does, those must be protected against multiple definition by multiple include guards; the key point is that the macros must be defined each time the file is included - a bit like <assert.h>.)
Each redefined macro would be explicitly protected by #undef REDEFINED_MACRO and then #define REDEFINED_MACRO ....
There is no point in testing whether the macro is defined before undefining it.
The build process would be modified to look in the headers directory before looking anywhere else. The compiler option would be -I./headers or something similar, depending on exactly where you locate the headers directory.
Depending on how you have decided to locate the normal versions of the headers, you might need another -I option (such as -I/usr if you've used #include "include/header.h" notation) to locate the standard headers again.
The upshot is that your private headers get used directly by the compiler, but they include the standard headers and then your extra.h header - thus achieving what you wanted without modifying the C source or the normal headers.
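A minimal sketch of that layout, reusing the generic header.h / REDEFINED_MACRO names from the description above (your real names and paths would differ):
// headers/header.h -- shadow copy, found first because of -I./headers
#include "/usr/include/header.h"   /* pull in the real header by whatever mechanism you chose */
#include "extra.h"                 /* then re-apply the overrides */

// extra.h -- deliberately has no include guard: the macros must be (re)defined on every inclusion
#undef  REDEFINED_MACRO
#define REDEFINED_MACRO my_replacement_value   /* hypothetical override */
The build is then invoked with something like gcc -I./headers ... so that the shadow header is seen before the real one.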
But there is something misguided about the whole attempt...you would be better off not trying this.
A makefile could be used to redefine the macros through the -U and -D compiler (gcc) options. But why redefine them after the originals are evaluated? I cannot think of a need for such a thing. Can you tell us what you are hoping to achieve with this?
The requirement is to insert extra.h after all the other .h files in a .c file. So adding it at the end of each .h file will insert it between two .h files included in sequence inside a .c file, which is not the intention.
You can use sed/awk inside makefile(s) to:
- first create duplicate .c files, inserting '#include "extra.h"' after the other #include lines inside each of the .c files (it will be tedious/tricky to resolve #ifdef blocks inside the .c files)
- then achieve your target compiling those duplicate .c files
- finally delete the duplicate .c files
You can use the -include file option of GCC, because of this feature:
If multiple -include options are given, the files are included in the order they appear on the command line.
So, as I understand it, you must include ALL *.h files from the command line; just keep your "extra.h" as the last header in the -include option list and you should get what you want.
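For example, a sketch of what extra.h could look like (the header names and SOME_MACRO are made up for illustration):
/* extra.h -- listed last on the command line, for example:
     gcc -include original1.h -include original2.h -include extra.h file.c
   The include guards of original1.h and original2.h then stop them from being
   processed again inside file.c, so these overrides are the ones left standing. */
#undef  SOME_MACRO
#define SOME_MACRO replacement_value   /* hypothetical override */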
There are two ways I can think of doing this according to your requirements, and both should be relatively simple, I hope.
The first way does not touch the source code at all; however, it requires that each header file you are #undef'ing things from has a header guard. You can copy and concatenate every header file that you need to "change" things in into one monolithic file, your "extra.h" file. Then, at the end of that file, go ahead and redefine all the macros you need, and include this file when you compile. The header guards will prevent the original headers from being included. Obviously, there are a number of potential problems with this approach, and it certainly wouldn't work in general. A sketch of this first approach follows below.
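A sketch of the idea, with an invented original1.h and SOME_MACRO standing in for whatever actually needs overriding:
/* extra.h -- verbatim copies of the headers to be overridden, guards and all */
#ifndef ORIGINAL1_H              /* same guard macro as the real original1.h */
#define ORIGINAL1_H
/* ... copied contents of original1.h, including the original definition ... */
#define SOME_MACRO old_value
#endif
/* overrides go at the very end, after all the copied headers */
#undef  SOME_MACRO
#define SOME_MACRO new_value     /* the definition you actually want */
Because extra.h defines the same guard macros as the originals, the real headers become no-ops when the .c files include them later, provided extra.h is seen first.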
The second way is a lot cleaner and more reliable, but it requires you to edit the code directly, albeit non-intrusively. For each header you need to redefine things in, make a copy of that header with an ".orig" suffix or something, then edit the actual header file directly. After you are all done doing whatever you are doing, then just copy all the ".orig" files back into the actual headers before your customers obtain the code. I assume your requirements aren't so draconian that you can't change the code even temporarily.
If none of that works, then I doubt you are going to find an effective answer from anybody short of hacking GCC directly and adding a "-postinclude" option yourself.
