Suppose I have multiple libraries that I can build with cmake or with configure scripts. Is there a tool that can help me build these libraries, so that I can easily manage rebuilding them with small modifications, like changing compiler flags?
I would like to run a sort of automated process, see feedback about each build, and keep some freedom in the build options.
Is there a tool like this, besides a conveniently written bash script?
Make seems like the best tool to use here, but a bash script would also work. You could use a makefile that calls the other makefiles with -f (or switches to their directory with -C). You could also handle the flags and such within a single makefile with judicious use of variables, targets and recipes. Realize that you can set make variables (and therefore flags) from the command line. That's about the most I can help without knowing more specifics of your situation. Good luck!
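As a minimal sketch of that idea (the subdirectory names libfoo and libbar are hypothetical, and each is assumed to contain its own makefile):

# Top-level makefile that drives the per-library builds.
SUBDIRS = libfoo libbar
CFLAGS ?= -O2

.PHONY: all $(SUBDIRS)
all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@ CFLAGS="$(CFLAGS)"

Running make CFLAGS="-O0 -g" would then rebuild every library with the new flags, since a command-line value overrides the default in the makefile.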
gcc has so many warning options, and I want to turn a bunch of them off (I'm compiling a ton of old code that's known to work, but generates lots of warnings). I suppose I could turn off all warnings, but surely that's not advisable, and I also don't want to set my CFLAGS to contain tens of -Wno-... flags that will appear on the screen as each module compiles.
What would be ideal would be the ability to put a bunch of these flags into a text file and reference that file on the command line. Maybe that's possible, but I can't find an option for it in the manual pages. Does anybody know if such an option exists?
Files containing command-line options to be used in the way you envisage are often called response files.
GCC supports response files, which may contain any or all of the command-line arguments (not just -opt options). The usage is:
gcc [args...] @file [more args...]
where file is a file of whitespace-separated command-line arguments.
@file is documented in the GCC manual in 3.2 Options Controlling the Kind of Output.
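For example (the file name old-code.opts and the particular warnings chosen are just illustrative):

printf '%s\n' -Wno-unused-variable -Wno-sign-compare -Wno-deprecated-declarations > old-code.opts
gcc @old-code.opts -c legacy_module.c

Every option in old-code.opts is inserted into the command line at the point where @old-code.opts appears.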
If you like response files for your personal builds, feel free. But in professional build practice they are not popular. Consider that Exhibit A for the diagnosis of a build break or problematic build is the complete build log, in which we hope to find the complete sequence of commands with all of their arguments. @file frustrates that hope, and holds us up while we look for the file, ask somebody remote to send it to us, or post its contents (perhaps on Stack Overflow!). Even when we see it, it can be difficult or impossible to rule out the possibility that what we are seeing is not what was actually in the file when the build was run. Build logs filled with multi-kilobyte command lines are normal and are everyday reading for build engineers.
I am developing an OS for embedded devices that runs bytecode. Basically, a micro JVM.
In the process of doing so, I am able to compile Java applications to bytecode(-ish), run them, and flash that onto, for instance, an ATmega1284P.
Now I've added support for C applications: I compile and process it using several tools and with some manual editing I eventually get bytecode that runs on my OS.
The process is very cumbersome and heavy and I would like to automate it.
Currently, I am using makefiles for automatic compilation and flashing of the Java applications & OS to devices.
Roughly, the steps for a C application are as follows, and they are all consecutive manual steps:
(1) Use Docker to run a Linux container with lljvm that compiles a .c file to a .class file (see also https://github.com/davidar/lljvm/tree/master)
(2) convert this .class file to a jasmin file (https://github.com/davidar/jasmin) using the ClassFileAnalyzer tool (http://classfileanalyzer.javaseiten.de/)
(3) manually edit this jasmin file in a text editor by replacing/adjusting some strings
(4) convert the modified jasmin file to a .class file again using jasmin
(5) put this .class file in a folder where the rest of my makefiles (the ones that already make and deploy the OS and class files from Java apps) can take over.
My current option seems to be to just keep using makefiles, but this is a bit unwieldy (I already have 5 different makefiles, and this would further extend that chain). I've also read a bit about SCons. In essence, I'm wondering which tools, or which approach, are recommended for complicated builds like this.
Hopefully this may help a bit, but the question as such could probably be a subject for a heated discussion without much helpful results.
As pointed out in the comments by others, you really need to automate the steps from your .c file to the point where you can integrate the result with the rest of your system.
There is generally nothing wrong with make, and you would not win too much by switching to SCons. You would get more ways to express what you want to do: among other things, if you wanted to write that automation directly inside the build system and its rules, you could use Python and not only shell (though should that be a concern, you could just as well call that Python code from make). But the essence of target, prerequisite, recipe is still there, and with it the need to write the necessary automation for those .c-to-integration steps.
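To make that concrete, here is a sketch of steps (1)-(4) as make pattern rules. Everything in it is a placeholder for your actual setup: the lljvm-image container name, the ClassFileAnalyzer command-line invocation, and a hypothetical fixups.sed script capturing your manual string edits.

# (1) compile .c to .class inside the lljvm container (invocation is a placeholder)
build/%.class: %.c
	@mkdir -p build
	docker run --rm -v $(CURDIR):/src lljvm-image /src/$< -o /src/build/$(@F)

# (2) disassemble the .class file to a jasmin file
build/%.j: build/%.class
	java -jar ClassFileAnalyzer.jar $< > $@

# (3) the formerly manual string edits, captured once in a sed script
build/%.fixed.j: build/%.j
	sed -f fixups.sed $< > $@

# (4) assemble the edited jasmin file back to a .class in the deploy folder
deploy/%.class: build/%.fixed.j
	java -jar jasmin.jar -d deploy $<

Once the manual edits of step (3) are expressed as a sed (or Python) script, step (5) reduces to pointing your existing makefiles at the deploy folder.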
If you really wanted to look into alternative options, Bazel might be of interest to you. The downside is that the initial effort to write the necessary rules to fit your needs could be costly, and depending on the size of your project, it might just be too much. On the other hand, once that is done, it would be very easy to apply those rules to a growing code base, and you could also ditch the container and rely on Bazel's more lightweight sandboxing and external rules to get the tools and bits you need for your build... all with a single system for build description.
I need some help with using a 3rd-party makefile when building my own project with CMake.
There isn't a way to do what you want directly. CMake doesn't provide a facility to include files into its generated files (i.e., include a "3rdparty.mk" into a CMake-generated Makefile). Nor can you directly include a "generated-file-type" (i.e., a Makefile) into a CMakeLists.txt. CMake's ExternalProject won't let you do that either.
What you need to do is somehow "parse" the makefile that has the information that you desire. There are a myriad of ways that you can do this. For example, you could write a shell-script wrapper that would grep your makefile for what you need then construct a CMake command line with the variables you want defined, and output it or call cmake for you. Depending on how comfortable you are with shell (or perl, python, etc.) you might feel this is the best option.
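A minimal sketch of such a wrapper (the makefile path, the variable name THIRDPARTY_LIBS, and the cache-variable name are all hypothetical):

#!/bin/sh
# Pull a variable's value out of the 3rd-party makefile...
LIBS=$(grep '^THIRDPARTY_LIBS' path/to/3rdparty.mk | cut -d= -f2-)
# ...and hand it to CMake as a cache variable.
cmake -DTHIRDPARTY_LIBS="$LIBS" path/to/source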
If you know these values will never (or only very rarely) change, you can hard-code them into your CMakeLists.txt (not recommended) or into a file you can include() (better).
You could also stay in CMake-land and use CMake's ExternalProject to help you. Using ExternalProject, you can:
Fetch your 3rd party libraries (download, copy, unzip, etc)
Patch the Makefiles of these libraries
Run make on those patched makefiles
Now, this patch that I mentioned is something that you'd have to write yourself and keep with the source of your primary project. The content of this patch would be a new target for make that writes a file you can include in your CMakeLists.txt via include(). You could start simply, and have this new make target (eg: make output_variables) write a list of set() commands to lib_A.cmake; a sketch follows below. Once comfortable with that, you could move on to more complicated output, like writing a lib_A-config.cmake file that CMake's find_package() would understand.
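As a sketch, the patched-in target might look like this (the variable names, and lib_A.cmake itself, are illustrative):

# New target added by the patch: export build settings as CMake set() commands.
output_variables:
	@echo 'set(LIB_A_CFLAGS "$(CFLAGS)")' > lib_A.cmake
	@echo 'set(LIB_A_LIBS "$(LIBS)")' >> lib_A.cmake

Your CMakeLists.txt then picks the values up with include(lib_A.cmake).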
Granted, the last option is probably the most complicated but it might make maintenance of your primary project easier, reducing pain in the future. You'll also gain a deeper understanding of CMake.
I'm new here and I need some answers. I searched the web but I didn't find anything useful. What I am searching for is a shell program that, when you execute it, creates a Makefile with the binary name given in the arguments, like:
./automakefile.sh hello .
will build you a Makefile with a binary called hello.
I hope you guys will help me, I'm counting on you <3
There is, unfortunately, no such magic command. If there was, we wouldn't need Makefiles to start with because the magic would most likely have been incorporated in the compiler.
There are several reasons why there isn't a command like that.
Given a random binary file, you can't generally say what programming language it was written in.
You also can't tell what source files were used to compile the binary file, or where in the file hierarchy they are located (not just where they were located when the binary file was last compiled, maybe on another system).
You don't know the dependencies between the source code files. Makefiles are primarily useful for keeping track of these (and compiler flags etc.), so that changing one single source file in a big project does not trigger a recompilation of everything.
You don't know what compiler to use, or what flags to pass to it. This is another thing a Makefile contains.
There are build tools available for making the creation of Makefiles easier, and for making them portable between systems on different architectures (the Makefiles, that is; not necessarily the programs, that's down to the programmer). One such set of tools is GNU autotools, another is CMake, and I'm sure there are others as well, but those are the ones I use.
Now you're facing another but similar problem: you still need to learn the syntax of, and write, your Makefile.am and configure.ac files (for the GNU tools), or your CMakeLists.txt files (for CMake).
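For scale, the Makefile that your hypothetical automakefile.sh would have to generate for a trivial program isn't long; a minimal hand-written sketch, assuming a single source file hello.c:

# Minimal Makefile for a single-source C program.
CC ?= cc
CFLAGS ?= -Wall -O2

hello: hello.c
	$(CC) $(CFLAGS) -o $@ $<

.PHONY: clean
clean:
	rm -f hello

The real difficulty that the tools above address only appears once there are many source files, dependencies between them, and portability concerns.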
I have a large project which uses a recursive autotools structure. Most of the build time is spent on a single directory within this, so I want to make that directory build in parallel. I've found documentation related to make's -j option to enable parallel building, but the question is, where should I specify -j in my Makefile.am for the directory I am building?
I understand that it's better to use a non-recursive structure for parallel building, but that's too big a job for now, and I'm hoping there's still a way to make this one directory build in parallel.
It is not your task as a maintainer to specify the level of parallelism of the build, because it depends on the machine you are building on. Often passing the number of CPUs to -j is a good idea, but not always. What is supposed to happen is that a user just runs make with the appropriate -j flag. If you also happen to be that user and you are tired of passing -j explicitly all the time, then
export MAKEFLAGS=-j2
in your shell profile (e.g. .bashrc) and have make always consider this option.
Assuming a linux distribution, and further assuming you would like the builds to be adaptive, you might try:
export MAKEFLAGS=-j`nproc`