Is it a best practice to list all the object files in a C++ makefile, and are wildcards acceptable?

I'm a Java developer learning C++. I'm using Eclipse as my IDE and MinGW as my toolset. Is it considered a best practice to list every single object file in a makefile? Also, is it just as acceptable to use wildcards to include all the files?

The use of wildcards is common and accepted, but it is not really good practice.
If extra source files get into your source directories, they could wind up causing conflicts or -- worse -- riding silently in your libraries as useless baggage (introns?). Also, if a needed source file goes missing, your linker will complain about a missing {function|typename|whatever} and it might not be obvious what file has been lost (not really a problem if you have good source control, but still annoying). Finally, if your build system is expected to produce different targets using different subsets of the source files, wildcards will require you to either divide your source directories Venn-diagram-style, or resort to filename conventions that do the same thing (gah!).
Maintaining explicit lists of object files in a makefile really isn't that hard to do, and it keeps things simple.
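For a small program, the explicit version is only a few lines (the file and target names here are invented for illustration):

OBJS = main.o parser.o util.o

prog: $(OBJS)
	$(CXX) $(CXXFLAGS) -o $@ $(OBJS)

The wildcard alternative, OBJS = $(patsubst %.cpp,%.o,$(wildcard *.cpp)), saves the typing but inherits all of the problems above.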

Related

Preprocess conditional arch/make file to get non-conditional file

I have a conditional makefile (well, actually I am dealing with the arch file that gets pulled in when invoking make) that is quite involved, and I would like to preprocess it to get rid of all the 'ifeq'/'ifneq' parts that only hurt readability, in order to see more clearly what is actually being done. I tried doing
make -n -d
where I get the full compiler invocations, but that is also a pain, since I then have to separate all the flags manually. I just want to get a nice makefile with my separate FLAGS, DFLAGS, LIBS assignments, etc.
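To illustrate the kind of thing I mean, here is a toy fragment (the real file is far longer):

ifeq ($(ARCH),linux)
FLAGS = -O2 -fPIC
else
FLAGS = -g
endif

For a given ARCH, I would like to end up with just the single FLAGS = ... line that actually applies.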
(My apologies if this has been said anywhere, but I am unable to find it).
Thanks!

How can I add built-in rules to make?

Make(1) has built-in rules, such that for simple tasks you don't need a makefile at all. I can type make prog and if the current directory has a prog.c, make will do something useful.
I have a number of rules like this (e.g., how to make .pdf from .html) that apply in many projects. If I have a makefile in a directory, I can simply include my rules from a file. Is there a way to tell make to use this file always? Like a dot file that make would always include before doing anything else.
Make's rules are truly built in, not read from a file. This has advantages (the entirety of make is one executable, so you can copy it and install it anywhere and get identical behavior) and disadvantages (you can't modify the default rules without modifying the source code and recompiling; if you want to do that, though, it's easy: see the default.c file in the sources).
You can, however, specify an extra makefile (or makefiles) to be parsed before the usual ones, using an environment variable. Create a makefile containing your extra rules, then (in your ~/.bashrc or wherever) set the MAKEFILES environment variable to the name of that file (or files), and don't forget to export it.
Now every make invocation will load these rules as well.
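As a minimal sketch (the file location and the HTML-to-PDF converter are just placeholder choices), the rules file might look like:

# ~/.make-rules: extra pattern rules picked up via MAKEFILES
%.pdf: %.html
	wkhtmltopdf $< $@

with this in your shell startup file:

export MAKEFILES=$HOME/.make-rules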
You may discover, though, that this isn't quite what you'd hoped, because it can cause other makefiles to fail or act in bizarre ways (for example, if you download open-source packages and want to build them locally). If you do this, just remember you did it, so that if you run into issues in a few months you'll remember to try undoing it and see if that helps :-)

How many times does a Common Lisp compiler recompile?

While not all Common Lisp implementations do compilation to machine code, some of them do, including SBCL and CCL.
In C/C++, if the source files don't change, the binary output of a C/C++ compiler will also not change, assuming the underlying system remains the same.
In a Common Lisp compiler, the compilation is not under the user's direct control, unlike C/C++. My question is: if the Lisp source files haven't changed, under what circumstances will a CL compiler compile the code more than once, and why? If possible, a simple illustrative example would be helpful.
I think that the question is based on some misconceptions. The compiler doesn't compile files, and it's not something that the user has no control over. The compiler is quite readily available through the compile function. The compiler operates on code, not on files. E.g., you can type at the REPL
CL-USER> (compile nil (list 'lambda (list 'x) (list '+ 'x 'x)))
#<FUNCTION (LAMBDA (X)) {100460E24B}>
NIL
NIL
There's no file involved at all. There is, however, also a compile-file function; notice that its description is:
compile-file transforms the contents of the file specified by
input-file into implementation-dependent binary data which are placed
in the file specified by output-file.
The contents of the file are compiled, and that compiled file can then be loaded. (You can load uncompiled source files, too.) I think your question might boil down to asking under what circumstances compile-file would generate a file with different contents. I think that's really implementation-dependent, and it's not really predictable. I don't know that your characterization of compilers for other languages necessarily holds either:
In C/C++, if the source files don't change, the binary output of a
C/C++ compiler will also not change, assuming the underlying system
remains the same.
What if the compiler happens to include a timestamp into the output in some data segment? Then you'd get different binary output every time. It's true that some common scripted compilation/build systems (e.g., make and similar) will check whether previous output can be reused based on whether the input files have changed in the meantime. That doesn't really say what the compiler does, though.
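For what it's worth, the compile-file/load cycle itself looks roughly like this at the REPL (the file name is made up, and the exact output pathname and fasl extension are implementation-dependent):

CL-USER> (compile-file "foo.lisp")
#P"/home/user/foo.fasl"
NIL
NIL
CL-USER> (load #P"/home/user/foo.fasl")
T

The three values returned by compile-file are the output truename and the warnings-p and failure-p flags.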
The rules are pretty much the same, but in Common Lisp it is not common practice to separate declarations from implementations, so usually you must recompile every dependency to be sure. This is a practical consequence shared by dynamic environments.
Imagining there were such a separation in place, the following are blatant examples (clearly not exhaustive) of changes that require recompiling specific dependent files, as the output may differ:
A changed package definition
A changed macro character or a change in its code
A changed macro (see the sketch below)
Adding or removing an inline or notinline declaration
A change in a global type or function type declaration
A changed function used in #., defvar, defparameter, defconstant, load-time-value, eql specializer, make-load-form generated code, defmacro et al (e.g. setf expanders)...
A change in the Lisp compiler, or in the base image
I mean, you can see it's not trivial to determine which files need to be recompiled. Sometimes the answer is "all subsequent files", e.g. after changing the " (double-quote) macro character, which might affect every literal string, or when the compiler has evolved in a non-backwards-compatible way. In essence, we end where we started: you can only be sure with a full recompile, without reusing fasls across compilations. And sometimes that is faster than determining the minimum set of files that need to be recompiled.
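A minimal sketch of the macro case (the file and function names are made up): suppose macros.lisp defines

(defmacro twice (x) `(* 2 ,x))

and user.lisp, compiled against it, defines

(defun f (n) (twice n))

The fasl for user.lisp contains the already-expanded body of f. If you later change twice and recompile only macros.lisp, loading the stale user.lisp fasl still gives you the old expansion; user.lisp has to be recompiled as well.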
In practice, you end up compiling single definitions a lot during development (e.g. with SLIME) and not recompiling files when there's a fasl at least as new as the source file. Often you reuse files from e.g. Quicklisp. But for testing and deployment, I advise clearing all fasls and recompiling everything.
There have been efforts to automate minimum-dependency compilation with SBCL, but I think it's too slow when you change the interim projects more often than not (it involves a lot of forking, so on Windows it's either infeasible or very slow). However, it may be a time saver for base libraries that rarely change, if at all.
Another approach is to make custom base images with base libraries built-in, i.e. those you always load. It'll save both compilation and load times.

How to organize the build process with makefiles for code in several directories

My Fortran code is structured as follows:
There are two folders (with several subdirectories)
1.
/home/user/general_part
where some very general files are located, which should be used in several versions of the program.
files: (with relative path)
- mainsubdir/main.F
- subdir1/file1.F
- subdir1/headerfile1.h
2.
/home/user/special_part/special_case1
where the case-dependent files are located.
files: (with relative path)
- subdir2/file2.F
- subdir2/headerfile2.h
- subdir3/file3.F
How could I organize the build process?
Should I use several makefiles, one in each of the directories?
Where should the object files be located (especially the ones built from the general files)?
My aim would be that I can start the build-process from the directory:
/home/user/special_part/special_case
with a simple make or a little script.
So in the end it should be possible to always build a program from the general files in 1. together with the special-case files located in:
/home/user/special_part/special_case1
/home/user/special_part/special_case2
...
The reason nobody is answering is probably that the question is too general. Be more specific.
Say something like: "this is the program I want to build, and this is my makefile, please critique my makefile".
You can organize it any way you like, as long as it's logical and consistent. I put some beginner guidelines at
https://stackoverflow.com/questions/19816058/makefile-fibonacci/19821801#19821801
No. Make is really designed for, and works best with, a single makefile. You can have relevant makefile fragments in each directory, which are included in the main makefile, but do not have complete makefiles in each subdirectory. Google the classic paper "Recursive Make Considered Harmful" to see why that is so.
You can place your results anywhere you want; some people place them alongside the sources, some in a separate directory. Just place results in some logical and consistent way. The same goes for intermediate files, such as object files.
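A minimal sketch of the fragment approach (the rules.mk names and the OBJS variable are my own invention; the paths are the question's):

# /home/user/special_part/special_case1/Makefile
GENERAL := /home/user/general_part
include $(GENERAL)/mainsubdir/rules.mk
include $(GENERAL)/subdir1/rules.mk
include subdir2/rules.mk
include subdir3/rules.mk

prog: $(OBJS)
	$(FC) -o $@ $(OBJS)

Each rules.mk merely appends its objects, e.g. subdir1/rules.mk would contain OBJS += $(GENERAL)/subdir1/file1.o. Then a plain make in /home/user/special_part/special_case1 builds the whole program, and special_case2 gets its own short makefile that includes the same general fragments.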

Syntax Checking with unsupported languages

I have some files that have a particular syntax that is similar to ada (not identical though), however I would like to verify the syntax before going and running them. There isn't a compiler for these files, so I can't check them before using them. I tried to use the following:
gcc -c -gnats <file>
However, this says "compilation unit expected". I've tried a few variations on this, but to no avail.
I just want to make sure the file is syntactically correct before using it, but I'm not sure how to do it, and I really don't want to write an entire syntax checker just for this.
Is there some way to add an additional, unsupported language to gcc without going through a recompile? Also, would that simply be a matter of giving gcc a file that describes the syntax constructs, or what would be entailed? I don't need a full compile, only a syntax check.
Alternately, are there any syntax checkers I can use that I can update an ada syntax check with the small number of changes required for this language?
I've listed Ada as a tag, since the syntax is nearly identical, and finding something that will do Ada syntax checking without compiling would be a 90% solution for me.
You could try running the files through gnatchop first. The GCC Ada compiler is unusual in that it expects filenames to match up with the main unit names inside the file. That may be what your error message is trying to say.
gnatchop will go through any files you give it and write out Ada source files with the appropriate names to make gcc happy (even splitting files into multiple files if needed).
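For example (the input and unit names here are hypothetical):

gnatchop my_stuff.ada
gcc -c -gnats my_unit.adb

gnatchop writes each compilation unit out to a correctly named .ads/.adb file, and -gnats tells GNAT to stop after the syntax check, so no object file is produced.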
Another option you might be interested in is OpenToken. It is a parser construction toolkit, written in Ada, that allows you to build your own parsers fairly easily. It comes with a syntax recognizer for Ada, so you may just be able to tweak that a bit for your needs.
