How to "weakly" #include a configuration file? - include

I'm writing library routines, some characteristics of which can be tailored by #include'ing a configuration file. However, I'd like this configuration file to be optional, with default parameters provided in the source. Here is a typical source beginning:
#include "userconf.h"
#ifndef BUFSIZE
#define BUFSIZE 100
#endif
...
where file userconf.h, if it exists, contains:
#define BUFSIZE 255
Standard compilers (gcc or others) consider a missing #include file an error (and they're right!). In this case, and only for this line, I'd like the compiler to continue without objection since default values are provided for parameters expected from the missing configuration file.
Is there a way to do this?
Note: I don't mind checking for the existence of the file from the make system (I'm using CMake) and passing a -D option if that's easier to do (but, please, provide the CMake directives for it; I'm not familiar with it, and the available documentation makes it hard to grasp the whole picture).

I found the answer "by accident" when including the wrong file.
You take advantage of the compiler's multiple include-directory search feature (the -I option). You arrange your -I option list so that there is a directory, preferably at the end of the chain, containing a skeletal version of the configuration file.
For example, directory default/ contains the following userconf.h:
#define BUFSIZE 255
and the development directory develop/ contains the live userconf.h:
#define BUFSIZE 100
Then, the source file of the original question may be compiled with
gcc source.c -Idevelop -Idefault ...
and -Idevelop may be omitted if it is part of the default searched directories.
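An alternative that avoids the extra directory, assuming a compiler that supports the __has_include operator (GCC 5+, Clang, or any C23 compiler), is to include userconf.h only when it can actually be found and otherwise fall back to the in-source defaults. A minimal sketch:
#if defined(__has_include)
#  if __has_include("userconf.h")
#    include "userconf.h"     /* optional user configuration */
#  endif
#endif

#ifndef BUFSIZE
#define BUFSIZE 100           /* default used when userconf.h is absent or silent */
#endif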

Related

Can I select an #include path using a script and command line args in an Inno Setup installer?

The problem arises because I have a number of installations where almost everything is the same except, of course, the files in the install. I have a suite of include files that differ between them.
So I thought, "Hey, let's simply add a command line argument to specify what file to include." I can get information from the command line argument in the Pascal code.
The problem comes when I try to use the information in the #include. The preprocessor seems to know nothing about the Pascal scripting. That makes sense, except that I want it to know. For example, I can't do this:
[Files]
#include "{code:GetMyArgument}"
or this:
[Files]
#include {param:foo|bar}
So the question is really: how can I get an #include to use a file path that I set in the command line arguments, or some other dynamic method? I can think of one way, but I don't like it: moving files around or changing file contents on the fly. Those solutions smell, I think. Is there a better way?
I am on version 5.5.6(u) of Inno Setup.
Just use a preprocessor variable:
#include IncludePath
And specify its value on the compiler's command line:
ISCC.exe Example1.iss /DIncludePath=Other.iss
Meaning of the /D switch:
/D<name>[=<value>] Emulate #define public <name> <value>
If you are using an Inno Setup IDE that does not support setting the compiler's command-line arguments (like Inno Script Studio), you can base the included script's filename on some installer option, like AppId, AppName, OutputBaseFilename, etc.
For example for a name based on the AppName, use:
#include SetupSetting("AppName") + ".iss"
Note that this works only if the #include directive, with the call to SetupSetting preprocessor function, is after the respective [Setup] section directive.
Yet another option is to reverse the include.
A main .iss is project-specific and it includes a shared .iss:
Project-specific .iss:
; Project-specific settings
[Setup]
AppId=id
AppName=name
[Files]
; Project specific files
; Include shared script
#include "shared.iss"
Note that it's perfectly OK if sections repeat. So shared.iss can again contain both [Setup] and [Files] sections with other directives and files.

gcc: passing list of preprocessor defines

I have a rather long list of preprocessor definitions that I want to make available to several C programs that are compiled with gcc.
Basically I could create a huge list of -DDEF1=1 -DDEF2=2 ... options to pass to gcc, but that would create a huge mess, would be hard to manage in a version-control system, and might some day exceed the command-line length limit.
I would like to define my defines in a file.
Basically, the -imacros option would do what I want, except that it only applies to the first source file (from the gcc documentation below):
-include file: Process file as if #include "file" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the #include "..." search chain as normal. If multiple -include options are given, the files are included in the order they appear on the command line.
-imacros file: Exactly like -include, except that any output produced by scanning file is thrown away. Macros it defines remain defined. This allows you to acquire all the macros from a header without also processing its declarations. All files specified by -imacros are processed before all files specified by -include.
I need to have the definitions available in all source files, not just the first one.
Look at the bottom of this reference.
What you might want is the @file option. This option tells GCC to read command-line options from file, and that file can of course contain preprocessor defines.
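For example, assuming an options file named defs.opt (the name is purely illustrative) containing:
-DDEF1=1 -DDEF2=2 -DDEF3=3
each compilation can pick those options up with:
gcc @defs.opt -c source.c -o source.o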
Honestly - it sounds like you need to do a bit more in your build environment.
For example, one suggestion is to create a header file that is included by all your source files and put all your #define definitions in it.
You could also use -include, but specify an explicit path - which should be determined in your Makefile/build environment.
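A minimal sketch of that approach (the file and macro names are illustrative): collect the definitions in one header and inject it into every translation unit from the build system with -include, so no source file has to be edited.
/* project_defs.h -- project-wide preprocessor definitions */
#ifndef PROJECT_DEFS_H
#define PROJECT_DEFS_H

#define DEF1 1
#define DEF2 2
/* ... */

#endif /* PROJECT_DEFS_H */
The Makefile then adds -include project_defs.h to CFLAGS so the header is processed at the start of every compilation.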
The -imacros option would work if your Makefile built each source file independently into its own object file (which is typical). It sounds like you're just throwing all the sources into a single compilation.

Why would one use #include_next in a project?

To quote the iOS Documentation on Wrapper Headers:
#include_next does not distinguish between <file> and "file" inclusion, nor does it check that the file you specify has the same name as the current file. It simply looks for the file named, starting with the directory in the search path after the one where the current file was found.
The use of #include_next can lead to great confusion. We recommend it be used only when there is no other alternative. In particular, it should not be used in the headers belonging to a specific program; it should be used only to make global corrections along the lines of fixincludes.
So, two questions, what is #include_next, and why would you ever need to use it?
It is used if you want to replace a default header with one of your own making. For example, let's say you want to replace "stdlib.h": you would create a file called stdlib.h in your project, and it would be included instead of the default header.
#include_next is used if you want to add some stuff to stdlib.h rather than replace it entirely. You create a new file called stdlib.h containing:
#include_next "stdlib.h"
int mystdlibfunc();
And the compiler will not include your stdlib.h again recursively, as would be the case with a plain #include, but will instead continue searching the remaining directories for a file named "stdlib.h".
It's handy if you're supporting multiple versions of something. For example, I'm writing code that supports PostgreSQL 9.4 and 9.6. A number of internal API changes exist, mostly new arguments to existing functions.
Compatibility headers and wrapper functions
I could write compatibility headers with static inline wrapper functions with new names for everything, basically a wrapper API, where I use the wrapper name everywhere in my code. Say something_compat.h with:
#include "something.h"
static inline something*
get_something_compat(int thingid, bool missing_ok)
{
assert(!missing_ok);
return get_something(thingid);
}
but it's ugly to scatter _compat or whatever suffixes everywhere.
Wrapper header
Instead, I can insert a compatibility header in the include path when building against the older version, e.g. compat94/something.h:
#include_next "something.h"
#define get_something(thingid, missing_ok) \
( \
assert(!missing_ok), \
get_something(thingid) \
)
so the rest of the code can just use the 9.6 signature. When building against 9.4 we'll prefix -Icompat94 to the header search path.
Care is required to prevent multiple evaluation, but if you're using #include_next you clearly don't mind relying on gcc. In that case you can also use statement expressions.
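For reference, a sketch of the same wrapper written with a GNU statement expression, so each argument is evaluated exactly once (this relies on GCC/Clang extensions; the inner get_something is not re-expanded by the preprocessor, so it calls the real 9.4 function; the local name compat_missing_ok is just illustrative):
#include <assert.h>
#include <stdbool.h>
#include_next "something.h"

#define get_something(thingid, missing_ok)        \
    ({                                            \
        bool compat_missing_ok = (missing_ok);    \
        assert(!compat_missing_ok);               \
        get_something(thingid);                   \
    })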
This approach is handy when the new version is the "primary" target, but backward compatibility for an older version is desired for some limited time period. So you're deprecating the older versions progressively and trying to keep your code clean with reference to the current version.
Alternatives
Or be a sensible person, use C++, and use overloaded functions and template inline functions :p
#include_next is a preprocessor directive that tells the compiler to exclude the search paths up to and including the directory where the current header was found when resolving the named header file. The typical need is when two header files of the same name must both be used. Use such features sparingly and only when absolutely necessary.
For example:
source file.c contents, using the usual file.h from path 1:
#include <stdio.h>
#include <stdlib.h>
#include <file.h>

int main() {
    printf("out value: %d", out_val);
    exit(0);
}
file.h header file in path 1, whose contents pull in the file.h from path 2:
Here #include_next instructs the preprocessor not to use the path 1 directory as a search path for file.h and to continue the search from the path 2 directory instead. This way you can have two files of the same name without the header circularly including itself.
#include_next <file.h>
int out_val = UINT_MAX - INT_MAX;
file.h in path 2 contents
#define INT_MAX 1<<63 - 1
#define UINT_MAX 1<<64 - 1

Pre-processing C code with GCC

I have some C source files that need to be pre-processed so that I can use another application to add Code Coverage instrumentation code in my file.
To do so, I use GCC (I'm using this on a LEON2 processor so it's a bit modified but it's essentially GCC 3.4.4) with the following command line:
sparc-elf-gcc -DUNIT_TEST -I. ../Tested_Code/0_BSW/PKG_CMD/MEMORY.c -E > MEMORY.i
With a standard file this works perfectly, but in this one the programmer used an #ifndef UNIT_TEST clause, and no matter what I do the code won't be pre-processed. I don't understand why, since I'm passing -DUNIT_TEST to GCC, explicitly defining it.
Does anyone have any clue what could cause this? I checked the resulting .i file and as expected my UNIT_TEST code is not present in it so I get an error when instrumenting it.
Anything wrapped in #ifndef UNIT_TEST is kept only when UNIT_TEST is NOT defined, so passing -DUNIT_TEST removes that block; to keep the code inside the #ifndef block you must not define the symbol. The preprocessor cannot emit both branches of an #ifdef/#ifndef: at preprocessing time each symbol either is or isn't defined, and only the matching branch survives in the output.
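A minimal illustration (compile with gcc -DUNIT_TEST -E demo.c; the file and variable names are just for the example):
#ifndef UNIT_TEST
int production_only;   /* dropped from the output, because UNIT_TEST IS defined */
#endif

#ifdef UNIT_TEST
int test_only;         /* kept in the output */
#endif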

Any way to disable `tempnam' is dangerous, better use `mkstemp' gcc warning?

I'm using tempnam() only to get the directory name, so this security warning does not apply to my case. How can I disable it? I couldn't find any switches to do it.
If you really only want the directory name, use the string constant macro P_tmpdir, defined in <stdio.h>.
"The tempnam() function returns a pointer to a string that is a valid filename, and such that a file with this name did not exist when tempnam() checked."
The warning arises because of the race condition between checking and a later creating of the file.
You want to only get the directory name? What should that be good for?
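If the directory name really is all you need, here is a sketch that avoids tempnam() entirely, using the P_tmpdir macro mentioned above plus, optionally, the TMPDIR environment variable (the TMPDIR fallback is an extra convention, not something from the question):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *tmpdir = getenv("TMPDIR");   /* optional user override */
    if (tmpdir == NULL || tmpdir[0] == '\0')
        tmpdir = P_tmpdir;                   /* e.g. "/tmp" with glibc */
    printf("temporary directory: %s\n", tmpdir);
    return 0;
}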
Like stranger already said, you may disable this (and similar warnings) using -Wno-deprecated-declarations.
The answer is no, because - on many systems - the GNU C library (glibc), which implements this function, is compiled so as to trigger a linker warning when the function is used.
See:
GCC bug page regarding this issue - I filed this a short while ago.
GNU ld bug page regarding this issue - filed in 2010!
GNU ld bug page suggesting an approach for resolving the issue - I filed this a short while ago.
Note that the problem is not specific to GCC - any linker is supposed to emit this warning, since its trigger is "hard-coded" in the compiled library.
If you want to create a temporary directory that's unique for the process, you can use mkdtemp.
This can, e.g., be useful to create FIFOs in there, or when a program needs to create lots of temporary files or trees of directories and files: Then they can be put into that directory.
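A minimal sketch (the template prefix is illustrative; mkdtemp() is POSIX.1-2008 and declared in <stdlib.h>):
#define _POSIX_C_SOURCE 200809L   /* make mkdtemp() visible under strict -std= modes */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char template[] = "/tmp/myapp.XXXXXX";   /* the trailing XXXXXX is replaced */
    char *dir = mkdtemp(template);
    if (dir == NULL) {
        perror("mkdtemp");
        return EXIT_FAILURE;
    }
    printf("created %s\n", dir);             /* create FIFOs / temp files in here */
    return EXIT_SUCCESS;
}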
As it is a linker warning, it may be obfuscated by using the ASM workaround/hack from this answer:
https://stackoverflow.com/a/29205123/2550395
Something like this (quick and dirty):
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>

#define __hide_section_warning(section_string) \
    __asm__ (".section " section_string "\n.string \"\rquidquid agis prudenter agas et respice finem \"\n\t.previous");

/* If you want to hide the linker's output */
#define hide_warning(symbol) \
    __hide_section_warning (".gnu.warning." #symbol)

char my_file[L_tmpnam];
int lock_fd;

hide_warning(tmpnam)

/* wrapper function added so the snippet compiles as a unit */
void make_lock_file(void)
{
    tmpnam(my_file);
    lock_fd = open(my_file, O_CREAT | O_WRONLY, (S_IRUSR | S_IWUSR | S_IRGRP));
}
However, it will still leave a trace in the linker map file and therefore isn't perfectly clean, besides already being a hack.
PS: It works on my machine ¯\_(ツ)_/¯
You can use GCC's -Wno-deprecated-declarations option to disable all warnings like this. I suggest you handle the warning properly, though, and take the suggestion of the compiler.
