I would like to know the uses of static pattern rules versus normal rules in make. I am new to make and have gone through some tutorials. When do we use static pattern rules? Could you please explain briefly?
Thanks in Advance.
Your question is mostly a matter of opinion. Notice that there are several build automation tools besides GNU make, e.g. ninja, scons, omake, etc.
When you code some project in C (or C++), you could have some C (or C++) files which are generated from something else (e.g. by lemon or by your own utility). For such cases (pedantically, you could call them metaprogramming), pattern rules can be useful, in particular if you have several such cases in a project. In other cases you generate files other than object files from C sources (e.g. documentation with doxygen), and then pattern rules are also very useful.
An example of a large C++ project with many C++ code generators is the GCC compiler. Back when (in 2009) GCC was coded in C, it already had a dozen specialized code generator programs emitting C code. For such cases, pattern rules can be convenient.
Of course, pattern rules are a luxury. You could in principle generate your Makefile and have it contain a simple rule for each individual file (in GCC, the Makefiles are generated by autoconf- and automake-based tooling).
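Since you asked specifically about static pattern rules, here is a minimal sketch (the file names are made up): a static pattern rule applies one recipe to an explicit list of targets, while the "normal" alternative repeats an explicit rule per file.

# A static pattern rule: "targets : target-pattern : prereq-pattern".
# Each object listed in $(objects) is built from the matching .c file.
# (Recipe lines must begin with a TAB.)
objects := foo.o bar.o baz.o

$(objects): %.o: %.c
        $(CC) $(CFLAGS) -c $< -o $@

# The "normal" equivalent repeats an explicit rule per file:
# foo.o: foo.c
#         $(CC) $(CFLAGS) -c foo.c -o foo.o
# ...and so on for bar.o and baz.o.

Unlike an implicit pattern rule %.o: %.c, the static pattern rule only applies to the files explicitly listed in $(objects), so stray .c files in the directory are never picked up.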
If you observe and study the source code of most large free software projects, you'll find that most of them have generators for C (or C++) files. So generating C code is common practice (the original Unix from the late 1970s already did that). Today, some software projects have most or even all (e.g. CAIA) of their C code generated.
The GNU make documentation has a page which lists the standard implicit variables used in various compilation contexts, such as CC, CFLAGS, etc. They are well defined and pretty safe to employ (I use them all the time).
Looking through extended documentation beyond the GNU website, I regularly see other variables which are not listed in the GNU documentation, such as COMPILE.c, LINK.o, etc.
Such variables appear in multiple recipes found on GitHub or elsewhere on the Internet, frequently from authors who seem to have a pretty good understanding of how make works.
The question is:
How reliable is it to use such variables?
They are not documented on the GNU make documentation website, but they seem stable enough that several authors have decided to rely on them. Is it a sane thing to do?
I'd say that they are documented and are pretty safe to use with GNU make (they are not in POSIX make). Quoting the GNU make manual's "Catalogue of Built-In Rules":
However, the recipes in built-in implicit rules actually use variables such as COMPILE.c, LINK.p, and PREPROCESS.S, whose values contain the recipes listed above.
make follows the convention that the rule to compile a .x source file uses the variable COMPILE.x. Similarly, the rule to produce an executable from a .x file uses LINK.x; and the rule to preprocess a .x file uses PREPROCESS.x.
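For illustration (the target names below are invented; the default values are roughly what make -p prints and can vary between GNU make versions), you could reuse them like this:

# Built-in defaults, roughly:
#   COMPILE.c     = $(CC) $(CFLAGS) $(CPPFLAGS) $(TARGET_ARCH) -c
#   LINK.o        = $(CC) $(LDFLAGS) $(TARGET_ARCH)
#   OUTPUT_OPTION = -o $@

# Reusing them keeps hand-written rules consistent with the built-in ones
# and honours the user's CC/CFLAGS/CPPFLAGS/LDFLAGS settings.
# (Recipe lines must begin with a TAB.)
parser.o: parser.c parser.h
        $(COMPILE.c) $(OUTPUT_OPTION) $<

myprog: main.o parser.o
        $(LINK.o) $^ $(LOADLIBES) $(LDLIBS) -o $@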
I'm trying to locate where __builtin_va_start is defined in GCC's source code, and see how it is implemented. (I was looking for where va_start is defined and then found that this macro is defined as __builtin_va_start.) I used cscope -r in GCC 9.1's source code directory to search for the definition but haven't found it. Can anyone point out where this function is defined?
That __builtin_va_start is not defined anywhere. It is a GCC compiler builtin (a bit like sizeof is a compile-time operator). It is an implementation detail related to the <stdarg.h> standard header (provided by the compiler, not the C standard library implementation libc). What really matters are the calling conventions and ABI followed by the generated assembler.
GCC has special code to deal with compiler builtins. And that code is not defining the builtin, but implementing its ad-hoc behavior inside the compiler. __builtin_va_start is expanded into some internal representation of your compiled C/C++ code, specific to GCC (some GIMPLE, perhaps).
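As a small illustration of where the builtin shows up: GCC's own <stdarg.h> defines va_start roughly as #define va_start(v,l) __builtin_va_start(v,l), so a perfectly ordinary variadic function (the one below is made up, not taken from GCC) already goes through the builtin:

#include <stdarg.h>
#include <stdio.h>

/* sum of n ints passed as variadic arguments */
int sum(int n, ...)
{
    va_list ap;
    int i, total = 0;
    va_start(ap, n);   /* with GCC, expands to __builtin_va_start(ap, n) */
    for (i = 0; i < n; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

int main(void)
{
    printf("%d\n", sum(3, 1, 2, 3));   /* prints 6 */
    return 0;
}

Compiling such a file with gcc -S and reading the generated assembly is a good way to see what the builtin actually produces for your target's calling convention and ABI.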
From a comment of yours, I would infer that you are interested in implementation details. But that should have been stated in your question.
If you study the GCC 9.1 source code, look inside gcc-9.1.0/gcc/builtins.c (the expand_builtin_va_start function there), and for other builtins inside gcc-9.1.0/gcc/c-family/c-cppbuiltin.c, gcc-9.1.0/gcc/cppbuiltin.c, and gcc-9.1.0/gcc/jit/jit-builtins.c.
You could write your own GCC plugin (in 2Q2019, for GCC 9; the C++ code of your plugin might have to change for the future GCC 10) to add your own GCC builtins. BTW, you might even overload the behavior of the existing __builtin_va_start with your own specific code, and/or you might have -at least for research purposes- your own stdarg.h header with #define va_start(v,l) __my_builtin_va_start(v,l) and have your GCC plugin understand your plugin-specific __my_builtin_va_start builtin. Be aware, however, of the GCC runtime library exception and read its rationale: I am not a lawyer, but I tend to believe that you should (and that that legal document requires you to) publish your GCC plugin under some open source license.
You first need to read a textbook on compilers, such as the Dragon book, to understand that an optimizing compiler is mostly transforming internal representations of your compiled code.
You further need to spend months studying the many internal representations of GCC. Remember, GCC is a very complex program (of about ten million lines of code). Don't expect to understand it with only a few days of work. Look at the GCC Resource Center website.
My dead GCC MELT project had references and slides explaining more about GCC (the design philosophy and architecture of GCC change slowly, so the concepts are still relevant even if individual details have changed). It took me almost ten years of full-time work to partly understand some of the middle-end layers of GCC. I cannot transmit that knowledge in a StackOverflow answer.
My draft Bismon report (work in progress, funded by H2020, so lots of bureaucracy) has a dozen pages (in its sections §1.3 and §1.4) introducing the internal representations of GCC.
I'm rewriting my Windows C++ Native Library (an ongoing effort since 2002) with public release in mind. For the past 10 years I've been the sole beneficiary of those 150+ KLOC and I feel others might find good uses for it also.
Currently the entire library is heavily templated and header-only. That means all the code is in the body of the classes. It's not very easy to manage, but it's OK.
I'm VERY tempted, after reading several C++ library coding guidelines, to break it into .hpp + .inl files. I've experimentally done so for a few classes and it does increase readability and make it easier for others to deal with. I know where everything is at any given time, but other users might want a quick overview of a class's declaration... and the definition only if necessary (debugging).
QUESTION:
What are the pros/cons of splitting the member definitions from the class definition for a class template? Is there a commonly accepted practice?
This is important for me because it's a one-way road. I can't refactor it the other way later on, so any feedback matters...
I've found my answer in another question.
Question: When should I consider making a library header-only? - and the answer is here^.
And the answer is: I will break it into .cpp and .hpp files and make it ready to compile both as header-only and as a static library or DLL.
#Steve Jessop:
If you think your non-template library could be header-only, consider dividing it into two files anyway, then providing a third file that includes both the .h and the .cpp (with an include guard).
Then anyone who uses your library in a lot of different TUs, and suspects that this might be costing a lot of compile time, can easily make the change to test it.
^ this is an awesome idea. It will take a bit more work but it's SO versatile.
UPDATE
It's important to explicitly instantiate^ the templated classes in the .cpp files.
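To make the layout concrete, here is a minimal sketch of the split (the names Stack, stack.hpp, stack.inl, stack.cpp and the MYLIB_HEADER_ONLY macro are invented for illustration):

// stack.hpp -- declarations only: quick to skim, and all most users need to read
#pragma once
#include <vector>

template <typename T>
class Stack {
public:
    void push(const T& value);
    T    pop();
    bool empty() const;
private:
    std::vector<T> data_;
};

// In a header-only build, pull the definitions in directly.
#if defined(MYLIB_HEADER_ONLY)
#include "stack.inl"
#endif

// stack.inl -- member definitions, included from stack.hpp or stack.cpp
#pragma once

template <typename T>
void Stack<T>::push(const T& value) { data_.push_back(value); }

template <typename T>
T Stack<T>::pop() { T v = data_.back(); data_.pop_back(); return v; }

template <typename T>
bool Stack<T>::empty() const { return data_.empty(); }

// stack.cpp -- compiled only for the static-library/DLL build
#include "stack.hpp"
#include "stack.inl"

// Explicit instantiations: without them, library users would get
// unresolved symbols for every Stack<T> they touch.
template class Stack<int>;
template class Stack<double>;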
I see that Fortran has 'call' and 'include' statements. What is the difference between the two? Does the .i filetype have some significance?
i.e:
include 'somefile.i'
call 'somesubroutine.f'
Thanks!
The INCLUDE statement lets you include source from some other file, as if it were in the file in which the statement is located. Its usefulness in organizing code is somewhat dubious, but some swear by it.
In F77 it was a common extension (from MIL-STD 1971, I believe), and in F90 it made it into the Standard.
Filetypes have no significance. As a matter of fact, in Fortran most filetypes (even the more common ones such as .f77, .f90 and so on) have no significance. Most compilers merely use them to automatically "detect" and differentiate free-form from fixed-form source code, but they also allow for others.
The CALL statement is used for calling subroutines. It is of the form CALL subroutine_name(list-of-arguments). Subroutines are one of the simpler ways of structuring and dividing your program into logical sub-units.
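As a minimal sketch (file and procedure names are invented) showing both statements; note that CALL names a subroutine, not a file:

! constants.inc -- textually spliced in wherever it is INCLUDEd
real, parameter :: pi = 3.14159265

! main.f90
program demo
  implicit none
  include 'constants.inc'   ! the compiler inserts the file's contents here
  call report(pi)           ! CALL invokes a subroutine by name
contains
  subroutine report(x)
    real, intent(in) :: x
    print *, 'value received:', x
  end subroutine report
end program demo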
But this is all relatively basic, and covered in every Fortran tutorial out there. Some are better, some are worse. A few good starting points for learners would be the Wikipedia page (not perfect, but not that bad either), FortranWiki and two books. One IRO-bot already mentioned; the other would be Chapman's Fortran 95/2003 for Scientists and Engineers. It is generally considered more suitable for beginners, having an easy-going approach and a plethora of practical examples, while Metcalf aims to be more of a reference book.
In general, file extensions don't have special meaning for the compiler. You can include any file, not just .i. A common extension for Fortran include files is .inc, though.
The include statement literally inserts the specified file into the source code.
The call statement is very different: it is used to call a Fortran subroutine.
These are very basic concepts. Before anything else, you should look up an introductory Fortran tutorial online (just try googling it), or get one of the several great books on Fortran programming, e.g. Metcalf and Reid.
I'm a Java developer learning C++. I'm using Eclipse as my IDE and MinGW as my toolset. Is it considered best practice to list every single object file in a makefile? Also, is it just as acceptable to use wildcards to include all the files?
The use of wildcards is common, and accepted, but not really good practice.
If extra source files get into your source directories, they could wind up causing conflicts or -- worse -- riding silently in your libraries as useless baggage (introns?). Also, if a needed source file goes missing, your linker will complain about a missing {function|typename|whatever} and it might not be obvious what file has been lost (not really a problem if you have good source control, but still annoying). Finally, if your build system is expected to produce different targets using different subsets of the source files, wildcards will require you to either divide your source directories Venn-diagram-style, or resort to filename conventions that do the same thing (gah!).
Maintaining explicit lists of object files in a makefile really isn't that hard to do, and it keeps things simple.
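For illustration (the file names are invented), the two styles might look like this:

# Explicit list: a stray .c file dropped into the directory cannot
# silently become part of the program.
# (Recipe lines must begin with a TAB.)
OBJS := main.o parser.o lexer.o

prog: $(OBJS)
        $(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS)

# Wildcard alternative -- shorter, but it builds whatever happens to be
# lying around:
# OBJS := $(patsubst %.c,%.o,$(wildcard *.c))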