I'm working on a project with a structure similar to the following:
root/inc/foo/bar/
root/src
I've just started to use Google Protocol Buffers, and when I compile the code I found that I need to add foo/bar/file.h to the file.cc file in order for the code to find the header. I don't plan to commit the .h and .cc files to the repo since they are generated automatically. Is there a parameter I can give protoc to separate the header/source files into different directories and add the correct path to the source file #includes?
Maybe you could run a script that moves the generated header, e.g. mv foo.h foofolder/, after executing protoc.
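A minimal wrapper sketch along those lines (the proto/ and gen/ directory names are assumptions, not from the question; --proto_path and --cpp_out are standard protoc options):

#!/bin/sh
# Generate the C++ files; the #include inside the generated .pb.cc mirrors the
# path of the .proto relative to --proto_path, i.e. "foo/bar/file.pb.h".
protoc --proto_path=proto --cpp_out=gen proto/foo/bar/file.proto
# Split the output: header under inc/, source under src/.
mv gen/foo/bar/file.pb.h inc/foo/bar/
mv gen/foo/bar/file.pb.cc src/
# Compile with -Iinc so the generated #include resolves after the move.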
I'm facing the following scenario:
An existing project which uses CMake
An external 3rd-party library which only comes with Makefiles
The difference of my situation compared to existing questions is that I don't need to have cmake to build the 3rdparty library via the Makefile. Instead, the 3rdparty library provides a library.mk Makefile which has variables like LIB_SRCS and LIB_INCS containing all source and header files required to compile the library.
My idea is to include the library.mk in the project's CMakeLists.txt and then add those $(LIB_SRCS) and $(LIB_INCS) to target_sources().
My question: How can I include library.mk into the existing CMakeLists.txt to get access to the $(LIB_SRCS) and $(LIB_INCS) for adding them to target_sources()? I'm looking for something like this:
include("/path/to/library.mk") # Somehow include the library's `library.mk` to expose variables to cmake.
add_executable(my_app)
target_sources(my_app
  PRIVATE
    main.c
    $(LIB_SRCS) # Add 3rd-party library source files
    $(LIB_INCS) # Add 3rd-party library header files
)
Using include() does not work, as library.mk is not a CMake listfile.
Since you can't be sure that your target system will even have Make on it, the only option is to parse the strings out of the .mk file. That might be easy if the variables are set directly as a list of filenames, or really hard if they are set using expansions of other variables, conditionals, etc. You can do this with file(STRINGS); see the CMake documentation.
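A minimal sketch of that parsing, assuming library.mk assigns LIB_SRCS on a single line as a plain space-separated file list (no Make functions, conditionals, or line continuations); the _lib_* variable names here are made up:

file(STRINGS "/path/to/library.mk" _lib_srcs_line REGEX "^LIB_SRCS[ \t]*:?=")
string(REGEX REPLACE "^LIB_SRCS[ \t]*:?=[ \t]*" "" _lib_srcs "${_lib_srcs_line}")
separate_arguments(_lib_srcs) # split the space-separated filenames into a CMake list

add_executable(my_app main.c)
target_sources(my_app PRIVATE ${_lib_srcs})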
Your plan will only work if the Makefiles are trivial and do not set important compiler flags, define preprocessor variables, modify the include directories, etc. And if they really are trivial, skip the parsing and just do something like aux_source_directory(<dir> <variable>) to collect all the sources from the library directory.
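If they really are that trivial, the whole exercise can shrink to something like this (the directory paths are placeholders):

aux_source_directory(/path/to/3rdparty/src LIB_SRCS) # collect every source file in that directory
add_executable(my_app main.c ${LIB_SRCS})
target_include_directories(my_app PRIVATE /path/to/3rdparty/include)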
You might also consider building and maintaining a CMakeLists.txt for this third-party library. Do the conversion once, and store it as a branch off of the "vendor" main branch in your version control system. Whenever you update, update the vendor branch from upstream, and merge or rebase your modifications. Or just store it in your existing project, referring to the source directory of the 3rd-party stuff.
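Such a hand-maintained CMakeLists.txt for the vendored library can stay very small; a sketch with placeholder target and file names:

# 3rdparty/CMakeLists.txt, kept alongside the upstream Makefiles
add_library(thirdparty STATIC src/lib_a.c src/lib_b.c) # file names are placeholders
target_include_directories(thirdparty PUBLIC include)

# In the main project:
add_subdirectory(3rdparty)
target_link_libraries(my_app PRIVATE thirdparty)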
I have a huge arm-none-eabi (GCC) CMake project.
I would like to build the entire project and stop right after the preprocessor.
For example:
If I have 500 headers and 600 source files, I would like to get 1100 additional files after the preprocessor, all in a preprocessed state.
e.g.
// my project
main.cpp
src/dummy.cpp
src/etc.cpp
...
// my project after preprocessor
main.cpp
main_preprocessed.cpp
src/dummy.cpp
src/dummy_preprocessed.cpp
...
If I just add the compiler flag -E, the preprocessing happens and the compiler tells me 'linker input file unused because linking not done', which is OK, but I don't get the preprocessed files.
Just preprocessing one file by hand is no good to me because I would need to pass a lot of include paths with -I, which takes a long time.
ADDITIONAL INFO:
Also, what is important is that I need my own project files preprocessed. In addition to my project files I have some libraries from different manufacturers; those I build into a static lib first and then link against that lib.
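One option worth noting (it is not mentioned in the thread, so treat it as a suggestion rather than the asker's solution): GCC's -save-temps flag keeps each translation unit's preprocessed output (.i/.ii) next to its object file while still compiling and linking normally, so the build does not have to be aborted. In CMake that could be wired up roughly like this:

# Applies to every target defined after this line; -save-temps=obj places the
# intermediate .i/.ii files next to the corresponding object files in the build tree.
add_compile_options(-save-temps=obj)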
It seems to me that SCons targets are not built in the order they are declared. My problem is that I need to generate some code first: I'm using protoc to process a my.proto file into .h and .cc files. I need some pseudo code like this (what would the working code look like?):
import os
env = Environment(ENV=os.environ, LIBPATH='/usr/local/lib')
env.ShellExecute('protoc', '--outdir=. --out-lang=cpp', 'my.proto')  # produces my.cc
myObj = Object('my.cc')  # should wait until 'my.cc' is generated by protoc
Dependency(myObj, 'my.cc')
mainObj = Object('main.cpp')
My question is:
How do I specify this shell execution of protoc in SConstruct/SConscript?
How do I make sure that the compilation of 'main.cpp' depends on the existence of 'my.cc'; in other words, that the build waits until 'my.cc' is generated before compiling?
Your observations and assumptions are correct: SCons will not execute the individual build commands in the order you list them in the SConstruct files. It runs them based on the dependencies of the targets and source files in your build, defined either implicitly (header includes in C++, for example) or explicitly (via the Depends() method).
So you have to define and set up your dependencies correctly, such that SCons delivers the output that you want. For the special protoc case in your example, a special Builder exists that will help you get the dependency graph right. It is available in our ToolsIndex, where support for a variety of other languages and dialects can also be found.
These special builders will emit the correct target nodes, e.g. when given a *.proto input file, and SCons is then able to automatically detect the dependency between the protoc input file and your main program if you say something like:
env=Environment(tools=['default','protoc'])
env.Protoc([], "test.proto")
env.Program('main', ['main.cpp'] + Glob('*.cc'))
The Glob('*.cc') will detect your *.cc files, coming out of the protoc Tool, and include them as dependencies for your final target main.
You can always write your own Builders and Emitters in SCons, which is the canonical way of making new tools/toolchains known to SCons' dependency analysis. In the UserGuide, sect. "18 Writing Your Own Builders", and especially our ToolsForFools guide, you can find more information about this.
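For a quick one-off, a plain Command() call can also express the dependency explicitly without writing a full Builder. A minimal sketch, assuming the default protoc C++ output names my.pb.cc/my.pb.h:

import os
env = Environment(ENV=os.environ)

# Tell SCons exactly which files the protoc invocation produces, so anything
# that consumes my.pb.cc is scheduled after this command.
env.Command(['my.pb.cc', 'my.pb.h'], 'my.proto',
            'protoc --cpp_out=. $SOURCE')

env.Program('main', ['main.cpp', 'my.pb.cc'])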
I have two libraries that share the same source files:
# src/lib_mt/Makefile.am:
libppb_la_SOURCES = rphs_mt.c timer_mt.c
# src/sipplib/Makefile.am:
libsipp_a_SOURCES = ../lib_mt/rphs_mt.c ../lib_mt/timer_mt.c
Each source file is compiled twice: first for lib_mt with -fPIC, then for sipplib without -fPIC.
The object files for each library are created in the corresponding directory.
Eventually subdir-objects will become the default. How can I keep the current behavior for these two source files? Some explicit rule, maybe?
There is no way to disable that once it becomes the default. What you can do instead is migrate this to a non-recursive Automake build system. At that point, Automake knows that different targets compile the same source files with different flags (this requires AC_PROG_CC_C_O to be called in configure.ac).
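A rough, untested sketch of that non-recursive layout in a single top-level Makefile.am (it assumes libsipp.a is a non-installed convenience library and that per-target flags are acceptable, which makes Automake generate per-target object names):

lib_LTLIBRARIES  = src/lib_mt/libppb.la
noinst_LIBRARIES = src/sipplib/libsipp.a

src_lib_mt_libppb_la_SOURCES  = src/lib_mt/rphs_mt.c src/lib_mt/timer_mt.c

src_sipplib_libsipp_a_SOURCES = src/lib_mt/rphs_mt.c src/lib_mt/timer_mt.c
src_sipplib_libsipp_a_CFLAGS  = $(AM_CFLAGS)  # per-target flags force distinct object names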
Alternatively, the hacky version is to create a src/sipplib/rphs_mt.c file that only contains
#include "../libmt/rphs_mt.c"
so that it is actually a separate build target.
I recently decided to organize the files in my project directory. I moved the parsers I had for a few different file types into their own directory and also decided to use ocamlbuild (as the project was getting more complicated and my simple shell script was no longer sufficient).
I was able to successfully include external libraries by modifying myocamlbuild.ml with some basic rules (calling ocaml_lib; I'll use ocamlfind some other time), but I am stuck on how to properly include the folder as a module in the project. I created a parser.mlpack file and filled it with the modules to be included (e.g. "parser/Date", et cetera), wrote a parser.mli in the root of the directory for their implementations, and modified the _tags file (see below).
During compilation, the parser directory is traversed properly, and parser.cmi and parser.mli.depends are both created in the _build directory, as well as all the *.cm[xio] files in the parser subdirectory.
I feel I might be doing something redundant, but regardless, the project still cannot find the Parser module when I compile!
Thanks!
_tags
debug : true
<*.ml> : annot
"parser" : include
<parser/*.cmx>: for-pack(Parser)
<curlIO.*> : use_curl
<mySQL.*> : use_mysql
<**/*.native> or <**/*.byte> : use_str,use_unix,use_curl,use_mysql
compilation error
/usr/local/bin/ocamlopt.opt unix.cmxa str.cmxa -g -I /usr/local/lib/ocaml/site-lib/mysql mysql.cmxa -I /usr/local/lib/ocaml/curl curl.cmxa curlIO.cmx utilities.cmx date.cmx fraction.cmx logger.cmx mySQL.cmx data.cmx project.cmx -o project.native
File "\_none\_", line 1, characters 0-1:
Error: **No implementations provided for the following modules:**
Parser referenced from project.cmx
Command exited with code 2.
You'll notice that -I parser is not included in the linking phase above; in fact, none of the parser-related files are included!
edit: Added new details from comments and answer below.
You need to "include" the parser directory in the search path. You can do this in _tags:
"parser": include
Then ocamlbuild can search the parser directory for interesting files.
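For reference, the layout described in the question would then look roughly like this (Date is the only module name given in the question; the rest is illustrative):

./_tags           # contains "parser" : include and <parser/*.cmx>: for-pack(Parser)
./parser.mlpack   # lists the packed modules, e.g. parser/Date
./parser.mli      # interface for the packed Parser module
./parser/date.ml  # becomes Parser.Date after packing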
I wonder if parser.mli is somehow interfering with the dependencies in processing the mlpack file. parser.cmi will be generated from the pack operation when parser.mlpack is processed and compiled. Try building with the parser.mli file removed. If that works, then this can be re-processed into a real answer.
Also, you don't need parser/ as a prefix to your modules in parser.mlpack if parser.mlpack is in the parser directory and you have the include tag set. But that shouldn't make a difference for this.
Update: this worked around the problem, but wasn't the root cause. Root cause, per comment below, was a file mentioned in the .mlpack that had been relocated.