Rcpp and custom header includes

I have made an R package using Rcpp and would like to include a header and a .cpp file in my C++ function.
I placed them in the src directory, next to my function's .cpp file, and include the header with #include "header.h"
But the compiler fails to find it. It only works when I provide the absolute path to the file.
I tested paths without spaces and changed the Windows backslashes to forward slashes, but still no luck.
Searching the online help, I have found nothing that helps me ..
Can you tell me what to do?
Including standard header files via <> works fine ..
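For concreteness, here is a minimal sketch of the layout I mean (the file and function names are made up for illustration):

// src/header.h
#ifndef HEADER_H
#define HEADER_H
int helper(int x);   // implemented in src/header.cpp, which sits alongside
#endif

// src/function.cpp
#include <Rcpp.h>
#include "header.h"   // this is the include the compiler fails to resolve

// [[Rcpp::export]]
int use_helper(int x) {
    return helper(x);
}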

Related

Makefile multiple target patterns

When trying to run my makefile (https://pastebin.com/CYqsYtj9), I run into an error:
C:/STM32_Projects/blink_project/test/cpputest/build/MakefileWorker.mk:485: *** multiple target patterns. Stop.
I'm going to link the MakefileWorker.mk as well (https://pastebin.com/5JSy3HsB), even though I'm pretty sure it's written properly, since it's from https://github.com/cpputest/
So, question: why am I getting this error?
EDIT:
When I try to run make from the Cygwin CLI, the error I get is the following:
C://STM32_Projects/blink_project//test/cpputest/build/MakefileWorker.mk:485: *** target pattern contains no '%'. Stop.
EDIT 2 (Minimal, verifiable example):
// test/main.cpp
#include "CppUTest/CommandLineTestRunner.h"

int main(int ac, char **av) {
    return CommandLineTestRunner::RunAllTests(ac, av);
}
So this is a simple main.cpp that my makefile should compile. Other than that, you need the full repo from https://github.com/cpputest/cpputest, compiled as shown in the README of that repo.
Well, as mentioned, I'm no expert when it comes to Windows, and I know even less about Cygwin.
But my suspicion is that the environment you're trying to use is not well supported or well tested on Windows. Here's what I see:
You set:
PROJECT_DIR=C:/STM32_Projects/blink_project
TEST_DIR=$(PROJECT_DIR)/test
TEST_SRC_DIRS=$(TEST_DIR)
which means TEST_SRC_DIRS is set to C:/STM32_Projects/blink_project/test.
Next in MakefileWorker.mk you get an error because TEST_DEPS is set to objs/C:/STM32_Projects/blink_project/test/main.o ... which, I hope we can all easily see, is a totally bogus path.
So how did we get this path? Let's look backwards and see:
At line 484 we see how TEST_DEPS is set:
TEST_DEPS = $(TEST_OBJS) $(MOCKS_OBJS) ...
So, it's likely the bogus path is in TEST_OBJS. That is set at line 380:
TEST_OBJS = $(call src_to_o,$(TEST_SRC))
First, what's TEST_SRC? It's set on line 379 as the result of the user-defined macro get_src_from_dir_list applied to TEST_SRC_DIRS (which you set above). That macro locates all the source files, i.e. files matching *.cpp, *.cc, and *.c, in each directory in TEST_SRC_DIRS. So based on the info output we can infer that this returns:
C:/STM32_Projects/blink_project/test/main.cpp
which is to be expected. Then we pass that to src_to_o, a user-defined macro on line 363, which in turn uses another user-defined macro, src_to, defined on line 362:
src_to = $(addprefix $(CPPUTEST_OBJS_DIR)/,$(call __src_to,$1,$2))
Oho!! Well, CPPUTEST_OBJS_DIR is set to objs at line 226, so this is very likely the start of our problem.
In a POSIX filesystem, where there is no such abomination as a "drive letter", paths compose freely: you can take a fully qualified path and create another valid path by sticking a new directory on the front of it. That can't work on Windows, where the drive letter must always come first.
But let's keep going. We've seen where the objs/ prefix comes from; now where does the rest of the path come from? That's the expansion of the user-defined macro __src_to, defined at line 361, applied to our source path above. What does it do? It replaces the .cpp extension with .o. But note this bit here:
$(if $(CPPUTEST_USE_VPATH),$(notdir $2),$2)
If CPPUTEST_USE_VPATH is true, we return notdir of the path; this strips out all the drive and directory info and returns just main.cpp, which turns into main.o, and the result is objs/main.o. That would work fine.
But if CPPUTEST_USE_VPATH is false (which it is by default), the path is not stripped, objs is prefixed onto the full path including the drive spec, and the embedded colon makes make choke with the errors you're seeing.
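To make the failure concrete, here is a minimal make snippet that reproduces the bogus path (the paths are yours from above; the variable wiring is my simplification of what MakefileWorker.mk does):

CPPUTEST_OBJS_DIR = objs
TEST_SRC = C:/STM32_Projects/blink_project/test/main.cpp

# Simplified src_to_o: swap .cpp for .o, then prefix the objs dir.
TEST_OBJS = $(addprefix $(CPPUTEST_OBJS_DIR)/,$(TEST_SRC:.cpp=.o))

# Expands to objs/C:/STM32_Projects/blink_project/test/main.o; when that
# appears as a target, make reads the embedded colon as a rule separator,
# hence "multiple target patterns".
$(info $(TEST_OBJS))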
So. Basically, the cpputest makefiles have issues on Windows. It looks like you have one of these choices:
Either set the CPPUTEST_USE_VPATH option to Y to enable it (and read about what that option means and how to use it in the cpputest docs),
Or, remove the drive specs from your paths in your makefile and just use:
PROJECT_DIR = /STM32_Projects/blink_project
which means you need to be careful that your current drive is C: before you invoke this, or else it won't be able to find things.
HTH!

How can I make gcc search for #include <...> files in the current source file's directory?

When a file contains an include line like this:
#include "name.h"
gcc (and other C compilers) search for the file "name.h" in the directory containing the file with the include line. This does not happen by default if the line looks like this:
#include <name.h>
Is there an option to gcc to make it behave this way in the latter case too?
As noted in the gcc documentation, -I. searches the compiler's current working directory for header files; that may or may not be the same as the directory containing the current file. In the case I am working on (importing external code, whose build environment automatically added the containing directory to the search path, into a system that has no such facility), the current directory is unfortunately not the same. What can I do? I'd rather not have to specifically modify the files...
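To illustrate the difference with a concrete (hypothetical) layout, suppose name.h sits next to the file that includes it in lib/, and we compile from the project root:

/* lib/impl.c, compiled from the project root with: gcc -I. -c lib/impl.c */
#include "name.h"  /* found: the directory of lib/impl.c is searched first */
#include <name.h>  /* not found: only the -I directories and the system
                      directories are searched, and -I. adds the project
                      root, not lib/ */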

Can I select an #include path using a script and command line args in an Inno Setup installer?

So the problem arises where I have a number of installations in which almost everything is the same except, of course, the files in the install. I have a suite of include files that are different.
So I thought, "Hey, let's simply add a command-line argument to specify which file to include." I can get information from the command-line argument in the Pascal code.
The problem comes when I try to use that information in the #include. The preprocessor knows nothing about the Pascal scripting. That makes sense, except that I want it to. For example, I can't do this:
[Files]
#include "{code:GetMyArgument}"
or this:
[Files]
#include {param:foo|bar}
So the question really is: how can I make an #include use a path that I set in the command-line arguments, or some other dynamic method? I can think of one way; I just don't like it: moving files around or changing file contents on the fly for this smells. Is there a better way?
I am on version 5.5.6(u) of Inno Setup.
Just use a preprocessor variable:
#include IncludePath
And specify its value on compiler's command-line:
ISCC.exe Example1.iss /DIncludePath=Other.iss
Meaning of the /D switch:
/D<name>[=<value>] Emulate #define public <name> <value>
If you are using an Inno Setup IDE that does not support setting the compiler's command-line arguments (like Inno Script Studio), you can base the included script's filename on one of the installer's options, like AppId, AppName, OutputBaseFilename, etc.
For example for a name based on the AppName, use:
#include SetupSetting("AppName") + ".iss"
Note that this works only if the #include directive, with the call to SetupSetting preprocessor function, is after the respective [Setup] section directive.
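A minimal sketch of that arrangement (the AppName value and the derived file name are illustrative):

[Setup]
AppName=MyApp
; ... other [Setup] directives ...

; works here because AppName has already been set above;
; this pulls in MyApp.iss
#include SetupSetting("AppName") + ".iss"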
Yet another option is to reverse the include.
A main .iss is project-specific and it includes a shared .iss:
Project-specific .iss:
; Project-specific settings
[Setup]
AppId=id
AppName=name
[Files]
; Project specific files
; Include shared script
#include "shared.iss"
Note that it's perfectly OK if sections repeat. So shared.iss can itself contain both [Setup] and [Files] sections, with other directives and files.
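For instance, shared.iss might look like this (the directives are purely illustrative):

; shared.iss
[Setup]
Compression=lzma2
SolidCompression=yes

[Files]
Source: "shared\*"; DestDir: "{app}"; Flags: recursesubdirs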

gcc: passing list of preprocessor defines

I have a rather long list of preprocessor definitions that I want to make available to several C programs that are compiled with gcc.
Basically, I could create a huge list of -DDEF1=1 -DDEF2=2 ... options to pass to gcc, but that would create a huge mess, would be hard to manage in a version-control system, and might at some point break the command-line length limit.
I would like to define my defines in a file.
Basically, -imacros would do what I want, except that it only applies to the first source file (below, from the gcc documentation):
-include file
Process file as if #include "file" appeared as the first line of the primary source file. However, the first directory searched for file is the preprocessor's working directory instead of the directory containing the main source file. If not found there, it is searched for in the remainder of the #include "..." search chain as normal. If multiple -include options are given, the files are included in the order they appear on the command line.

-imacros file
Exactly like -include, except that any output produced by scanning file is thrown away. Macros it defines remain defined. This allows you to acquire all the macros from a header without also processing its declarations. All files specified by -imacros are processed before all files specified by -include.
I need to have the definitions available in all source files, not just the first one.
Look at the bottom of this reference.
What you might want is the @file option. It tells GCC to read command-line options from file, and that file can of course contain preprocessor defines.
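As a sketch (the file name is mine), you would collect the options in a response file, say defines.opt, with one option per line:

-DDEF1=1
-DDEF2=2

and then reference it in every compilation:

gcc @defines.opt -c foo.c
gcc @defines.opt -c bar.c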
Honestly, it sounds like you need to do a bit more in your build environment.
For example, one suggestion is to create a header file that is included by all your source files and #define all your definitions there.
You could also use -include, but specify an explicit path, which should be determined in your Makefile/build environment.
The -imacros approach would work if your Makefile built each source file independently into its own object file (which is typical). It sounds like you're throwing all the sources into a single compilation instead.
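A sketch of that per-file setup (defs.h and the file names are mine; recipe lines are tab-indented), with a Makefile that compiles each source to its own object and forces the definitions in via -include:

CPPFLAGS += -include defs.h   # or -imacros defs.h to keep only the macros

objs = foo.o bar.o

prog: $(objs)
	$(CC) $(objs) -o $@

%.o: %.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -c $< -o $@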

Why would one use #include_next in a project?

To quote the iOS Documentation on Wrapper Headers:
#include_next does not distinguish between <file> and "file" inclusion, nor does it check that the file you specify has the same name as the current file. It simply looks for the file named, starting with the directory in the search path after the one where the current file was found.

The use of #include_next can lead to great confusion. We recommend it be used only when there is no other alternative. In particular, it should not be used in the headers belonging to a specific program; it should be used only to make global corrections along the lines of fixincludes.
So, two questions, what is #include_next, and why would you ever need to use it?
It is used if you want to replace a default header with one of your own making. For example, let's say you want to replace "stdlib.h". You would create a file called stdlib.h in your project, and that would be included instead of the default header.
#include_next is used if you want to add some stuff to stdlib.h rather than replace it entirely. You create a new file called stdlib.h containing:
#include_next "stdlib.h"
int mystdlibfunc();
And the compiler will not include your stdlib.h again recursively, as would happen with a plain #include; instead it continues searching the remaining directories for a file named "stdlib.h".
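Concretely (the directory name myinc is mine), you would put the wrapper on the include path ahead of the system directories:

// myinc/stdlib.h
#include_next <stdlib.h>   // skips myinc/, continues to the system stdlib.h
int mystdlibfunc();

and build with:

gcc -Imyinc -c prog.c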
It's handy if you're supporting multiple versions of something. For example, I'm writing code that supports PostgreSQL 9.4 and 9.6. A number of internal API changes exist, mostly new arguments to existing functions.
Compatibility headers and wrapper functions
I could write compatibility headers with static inline wrapper functions with new names for everything, basically a wrapper API, where I use the wrapper name everywhere in my code. Say something_compat.h with:
#include "something.h"
static inline something*
get_something_compat(int thingid, bool missing_ok)
{
assert(!missing_ok);
return get_something(thingid);
}
but it's ugly to scatter _compat or whatever suffixes everywhere.
Wrapper header
Instead, I can insert a compatibility header in the include path when building against the older version, e.g. compat94/something.h:
#include_next "something.h"
#define get_something(thingid, missing_ok) \
( \
assert(!missing_ok), \
get_something(thingid) \
)
so the rest of the code can just use the 9.6 signature. When building against 9.4 we'll prefix -Icompat94 to the header search path.
Care is required to prevent multiple evaluation, but if you're using #include_next you clearly don't mind relying on gcc. In that case you can also use statement expressions.
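For instance, a statement-expression version of the wrapper above (still a sketch, GCC-specific, and assuming <assert.h> and <stdbool.h> are in scope as in the compat header earlier) evaluates missing_ok exactly once:

#define get_something(thingid, missing_ok)  \
    ({                                      \
        bool missing_ok_ = (missing_ok);    \
        assert(!missing_ok_);               \
        get_something(thingid);             \
    })

As before, the self-reference to get_something inside the macro body is not expanded again.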
The wrapper-header approach is handy when the new version is the "primary" target, but backward compatibility with an older version is desired for some limited time period. So you're deprecating the older versions progressively and trying to keep your code clean with reference to the current version.
Alternatives
Or be a sensible person, use C++, and use overloaded functions and template inline functions :p
#include_next is a preprocessor directive that tells the compiler to exclude the search paths up to and including the one where the current header was found, so that the named file resolves to a different header further down the search path. The typical need is when two header files of the same name must both be used. Use such features sparingly and only when absolutely necessary.
For example:
Source file file.c, picking up the usual file.h from path 1:

#include <file.h>
#include <stdio.h>

int main(void) {
    printf("out value: %llu\n", out_val);
    return 0;
}
file.h in path 1, pulling in the file.h from path 2:

#include_next <file.h>
unsigned long long out_val = UINT_MAX - INT_MAX;

The #include_next instructs the preprocessor not to use path 1 as a search path for file.h again and to continue with path 2 instead. This way you can have two files of the same name without the fear of the header circularly including itself.
file.h in path 2:

/* Illustrative 64-bit limits; these names shadow the ones in <limits.h>. */
#define INT_MAX  0x7FFFFFFFFFFFFFFFLL
#define UINT_MAX 0xFFFFFFFFFFFFFFFFULL
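Assuming path1/ and path2/ hold the two headers, a build might look like this (a sketch; UINT_MAX - INT_MAX here works out to 2^63):

gcc -Ipath1 -Ipath2 -o demo file.c
./demo    # prints: out value: 9223372036854775808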
