make is not using -std=c++11 option for g++

I am trying to compile C++ files using make, but it does not use the -std=c++11 flag by default. Whenever I need to compile a program that uses C++11-specific features, I have to invoke g++ explicitly.
So I want to ask: how can I have make automatically use the -std=c++11 option for all the C++ files on my system?
If I need to change some global makefile for g++, where is that makefile located on Linux Mint 18, and what needs to be changed or added?
Or do I need to create a Makefile myself?
EDIT 1: I am invoking make like make myfile
And there are only .cpp files and their binaries in the directory. I don't have any Makefile in the directory.
EDIT 2: Here, myfile is the name of the c++ file which I want to compile.
When I run make with the -d option, I get the following output. (I cannot paste all of it, as it is quite long and exceeds the body size limit, so I am including screenshots instead.)
[Screenshot 1: the start of the make -d output]
[Screenshot 2: some lines from the end of the output]
I intentionally made a change in the file "MagicalWord.cpp" so that make finds something to make!

There is no "global makefile" and there is no way to change the default flags for all invocations of make (unless you edit the source code to GNU make and compile it yourself, which is a bad idea in this situation).
In your makefile(s), add the line:
CXXFLAGS += -std=c++11
Assuming you're using the built-in rules for compiling things, or that you're using the standard variables with your own rules, that will do what you need.
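For example, with nothing but the built-in rules in play, the whole makefile can be this one line (a sketch, using the myfile example from the question):
# Makefile: rely on make's built-in rules; just extend the standard variable
CXXFLAGS += -std=c++11
Running make myfile then expands the built-in rule to roughly g++ -std=c++11 myfile.cpp -o myfile.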
If that doesn't work, we'll need to see your makefile, or at least the rules you use to build your C++ source files. (Things like the -d output aren't useful here; that would be interesting if files you thought should be built weren't being built, or similar.)

Setting a language standard system-wide for all your C++ projects isn't necessarily a good idea. Instead, define a Makefile that specifies any compiler options you'd like:
CXXFLAGS := -std=c++11 $(CXXFLAGS)
The CXXFLAGS are then passed to your compiler when compiling a C++ program (assuming you're using the default GNU Make rules).
If the Makefile lives in your current working directory, you can now run make target in order to compile a target.cpp file into a target executable.
If the Makefile is in another directory, you must specify the path to it:
make -f path/to/your/Makefile target
If you want to add extra parameters just for one run, you can set an environment variable or a make variable on the command line:
# environment:
CXXFLAGS='-std=c++11' make target
# make variable:
make target CXXFLAGS='-std=c++11'
Any of these will cause the execution of g++ -std=c++11 target.cpp -o target or equivalent.
In theory you can edit your shell profile to export CXXFLAGS='-std=c++11', which makes that environment variable available to all programs you run. In practice, setting compiler options through environment variables tends to cause more problems than it solves.
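For completeness, such a profile entry would look like this (a sketch for a Bourne-style shell; see the caveat above):
# in ~/.profile: exported to every program started from your sessions
export CXXFLAGS='-std=c++11'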
Of all these solutions, just writing a normal Makefile is by far the easiest approach. That way, all of the build configuration is in one place and completely automated.

Related

Why does a target archive behave like a .PHONY target in a Makefile?

I have a simple Makefile that builds an archive, libfoo.a, from a single object file, foo.o, like this:
CC=gcc
CFLAGS=-g -Wall
AR=ar
libfoo.a: libfoo.a(foo.o)
foo.o: foo.c
The first time I run make, it compiles the C file, then creates an archive with the object file:
$ make
gcc -g -Wall -c -o foo.o foo.c
ar rv libfoo.a foo.o
ar: creating libfoo.a
a - foo.o
However, if I run make again immediately (without touching foo.o), it still tries to update the archive with ar r (insert foo.o with replacement):
$ make
ar rv libfoo.a foo.o
r - foo.o
Why does Make do this when it shouldn't have to? (If another target depends on libfoo.a, that target will be rebuilt as well, etc.)
According to the output of make -d, it seems to be checking for the non-existent file named libfoo.a(foo.o), and apparently decides to rerun ar r because of that. But is this supposed to happen? Or am I missing something in my Makefile?
You are seeing this because the people who put together your Linux distribution (in particular the people that built the ar program you're using) made a silly decision.
An archive file like libfoo.a contains within it a manifest of the object files in the archive, along with the time each object was added. That's how make knows whether an object is out of date with respect to the archive (make works by comparing timestamps; it has no other way to tell whether a file is out of date).
In recent times it's become all the rage to have "deterministic builds", where after a build is complete you can do a byte-for-byte comparison between it and some previous build, to tell if anything has changed. When you want to perform deterministic builds it's obviously a non-starter to have your build outputs (like archive files) contain timestamps since these will never be the same.
So, the GNU binutils folks added a new option to ar, the -D option, to enable a "deterministic mode" in which a timestamp of 0 is always written into the archive, so that file comparisons will succeed. Obviously, doing this breaks make's handling of archives, since make will always conclude that the object is out of date.
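You can see those per-member timestamps yourself by listing the archive verbosely (illustrative output, not taken from the question's system; under deterministic mode the member date collapses to the Unix epoch):
$ ar tv libfoo.a
rw-r--r-- 0/0   1368 Jan  1 00:00 1970 foo.o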
That's all fine: if you want deterministic builds you add that extra -D option to ar, and you can't use the archive feature in make, and that's just the way it is.
But unfortunately, it went further than that. The GNU binutils developers unwisely (IMO) provided a configuration parameter that allowed the "deterministic mode" to be specified as the default mode, instead of requiring it to be specified via an extra flag.
Then the maintainers of some Linux distros made an even bigger mistake, by adding that configuration option when they built binutils for their distributions.
You are apparently the victim of one of these incorrect Linux distributions and that's why make's archive management doesn't work for your distribution.
You can fix it by adding the -U option, to force timestamps to be used in your archives, when you invoke ar:
ARFLAGS += -U
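Applied to the makefile from the question, the whole fix is one added line (a sketch; -U is GNU ar's modifier for using actual timestamps):
CC = gcc
CFLAGS = -g -Wall
AR = ar
# force real timestamps back on, overriding a distro-configured deterministic default
ARFLAGS += -U
libfoo.a: libfoo.a(foo.o)
foo.o: foo.c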
Or, you could get your Linux distribution to undo this bad mistake and remove that special configuration parameter from their binutils build. Or you could use a different distribution that doesn't have this mistake.
I have no problem with deterministic builds; I think they're a great thing. But the mode loses features, and so it should be an opt-in capability, not an on-by-default one.

Disable optimizations for a specific file with autotools

I'm working on setting up autotools for a large code base that was once built by a bash script and later by hand-written Makefiles.
We have a set of files that require that compiler optimizations be turned off. These files are already in their own subdirectory, so they will have their own Makefile.am.
What's the proper way to drop any existing compiler optimizations and force a -O0 flag on the compiler for these specific files?
I went with Brett Hale's comment to use subpackages. I was able to insert
: ${CFLAGS="-O0"}
before AC_PROG_CC, which sets the appropriate optimization level. The other solutions do not work, since the default -g -O2 was being appended last, and you can never get another -O flag in after it.
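In context, the relevant part of the subpackage's configure.ac looks something like this (a sketch; the package name and file list are hypothetical):
AC_INIT([noopt-subpkg], [0.1])
AM_INIT_AUTOMAKE
# Provide a default only when the user did not set CFLAGS; this must come
# before AC_PROG_CC, which would otherwise default CFLAGS to "-g -O2".
: ${CFLAGS="-O0"}
AC_PROG_CC
AC_CONFIG_FILES([Makefile])
AC_OUTPUT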
You don't have to remove existing optimizations: the last -O option on the compiler invocation wins, so it's good enough to just add -O0 at the end.
This is not directly supported by automake, but there is a trick you can use that is described in the documentation.
Otherwise, if you know you'll only ever invoke your makefile with GNU make, you can play other tricks that are GNU make specific; you may have to disable automake warnings about non-portable content.

Trouble with simple makefile in C

I am somewhat of a beginner in C and have a project due where I need to include a makefile to compile my single file program that uses pthreads and semaphores. My makefile looks like:
# Makefile for pizza program
pizza: pizza.o
	gcc -lrt -lpthread -g -o pizza pizza.o
pizza.o: pizza.c
	gcc -lrt -lpthread -g -c pizza.c
and I keep getting:
make: Nothing to be done for 'Makefile'.
I have done several makefiles before and have never gotten this message. I've tried different semantics in the makefile and have only gotten this same message. And yes, the command is tabbed after the target and dependency line.
Using gcc on tcsh. I have read other makefile posts on SO but I wasn't able to use any of the answers to figure it out for my case.
Any help would be greatly appreciated!
The arguments to make are the targets to be built.
You are running make Makefile, which tells make to try to build the target Makefile.
There is no such target in your makefile, make has no built-in rule that applies to it, and the file exists (and is assumed to be up to date), which is what that message is telling you.
To run the default target (by default the first target listed) you can just run make (assuming you are using a default name like Makefile for your makefile).
You can also use the -f argument to make to select an alternate makefile name.
So in this case make -f Makefile does the same thing as plain make (since Makefile is one of the default names searched for).
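To illustrate with the pizza makefile above (assuming it is named Makefile and sits in the current directory):
$ make                     # builds the first target, pizza
$ make pizza               # the same, naming the target explicitly
$ make -f Makefile pizza   # -f selects the makefile; equivalent here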

With autoconf/automake, how do I specify include file paths?

Let's say I want to have the generated makefile pass some specific header paths to g++.
What do I need to add to configure.ac or Makefile.am to specify this?
(Note: I do not want to pass them in CPPFLAGS with ./configure; I want those paths baked in before that step.)
EDIT:
Specifically, I want to include, let's say, /usr/include/freetype and /mypath/include.
I put AC_CHECK_HEADERS([freetype/config/ftheader.h]) and it passes, but doesn't seem to add it to the -I passed to g++.
I also tried adding CPPFLAGS=-I.:/usr/include/freetype:/mypath/include, but it screws up and puts -I twice: the first as -I., and it ignores the 2nd -I.
Since the question was about what to put in the automake files, I would have thought AM_CPPFLAGS was the right variable to use to add includes and defines for all C/C++ compiles. See http://www.gnu.org/software/automake/manual/html_node/Program-Variables.html
Example:
AM_CPPFLAGS = -I/usr/local/custom/include/path
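Using the paths from the question, a Makefile.am could look something like this (a sketch; the program name and source file are hypothetical):
# AM_CPPFLAGS applies to every C/C++ compile in this Makefile.am
bin_PROGRAMS = myprog
myprog_SOURCES = main.cpp
AM_CPPFLAGS = -I/usr/include/freetype -I/mypath/include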
Hard-coding paths into the package files is absolutely the wrong thing to do. If you choose to do that, you need to be aware that you are violating the basic rules of building a package with the autotools. If you specify /mypath/include in your package files, you are putting things specific to your machine into a package that is intended to work on all machines; that is clearly wrong.
It looks like what you want is for your package, when built on your machine, to look for header files in /mypath. That is easy to accomplish without bastardizing your package. There are (at least) three ways to do it:
1. Use a config.site file. In /usr/local/share/config.site (create this file if necessary), add the line:
CPPFLAGS="$CPPFLAGS -I/mypath/include"
Now any package using an autoconf-generated configure script with the default prefix (/usr/local) will append -I/mypath/include to CPPFLAGS, and the headers in /mypath/include will be found.
2. If you want the assignment to be made for all builds (not just those installed under /usr/local), put the same line specifying CPPFLAGS in $HOME/config.site, and set CONFIG_SITE=$HOME/config.site in the environment of your default shell. Now, whenever you run an autoconf-generated configure script, the assignments from $HOME/config.site will be made.
3. Simply specify CPPFLAGS in the environment of your default shell.
All of these solutions have two primary advantages over modifying your build files. First, they will work for all autoconf generated packages (as long as they follow the rules and don't do things like assigning user variables such as CPPFLAGS in the build files). Second, they do not put your machine specific information into a package that ought to work on all machines.
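For instance, the second option boils down to two small pieces (a sketch; the file locations are the conventional ones):
# $HOME/config.site
CPPFLAGS="$CPPFLAGS -I/mypath/include"
# in your shell profile, so every configure run picks the file up
export CONFIG_SITE=$HOME/config.site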

Change older makefile system to take advantage of parallel compiles

We use Microsoft NMAKE to compile a large number of native C++ and some Intel Fortran files. Typically the makefiles contain lines such as this (for each file):
$(LINKPATH)\olemisc.obj : ole2\olemisc.cpp $(OLEMISC_DEP)
	$(CCDEBUG) ole2\olemisc.cpp
	$(GDEPS) ole2\olemisc.cpp
OLEMISC_DEP =\
	e:\ole2\ifaceole.hpp\
	e:\ole2\cpptypes.hpp\
	etc.
It works fine, but compiles one file at a time. We would like to take advantage of multi-core processors and compile more than one file at a time. I would appreciate some advice about the best way to make that happen, please. Here is what I have so far.
One: GNU make lets you execute parallel jobs using the --jobs=2 option, for example, and that works fine with GCC (we can't use GCC, sadly). But Microsoft's NMAKE does not seem to support such an option. How compatible are the two make programs, and if we did start using GNU make, can you run two cl.exe processes at the same time? I would expect them to complain about the PDB (debug) file being locked, or does one of the newer cl.exe command-line arguments get you around that?
Two: cl.exe has a /MP (build with multiple processes) flag, which lets you compile multiple files at the same time if passed together via the command line, for example:
cl /MP7 a.cpp b.cpp c.cpp d.cpp e.cpp
But using this would require changes to the makefile. Our makefiles are generated by our own program from other files, so I can easily change what we put in them. But how do you combine the dependencies from different .cpp files in the makefile so that they get compiled together via one cl.exe call? Each .obj is a different target with its own set of commands to make it.
Or do I change the makefile to not call cl.exe directly, but rather some other little executable that we write, which then collects a series of .cpp files together and shells out to cl.exe, passing multiple arguments? That would work and seems doable, but also seems overly complicated, and I can't see anyone else doing that.
Am I missing something obvious? There must be a simpler way of accomplishing this?
We do not use Visual Studio or a solution file to do the compiles, because the list of files is extensive, we have a few special items in our makefiles, and theoretically do not want to be overly tied to MS C++ etc.
I thoroughly recommend GNU make on Windows. I tend to use Cygwin make, as the environment it creates tends to be very portable to Unix-like platforms (Mac and Linux for a start). Compiling with the Microsoft toolchain, in parallel and with 100% accurate dependencies and full CPU usage, works very well. You have other requirements, though.
As far as your nmake question goes, look up batch-mode inference rules in the manual. Basically, nmake is able to call the C compiler once, passing it a whole load of C files in one go. Thus you can use the compiler's /MP... type switches.
Parallel compiling built into the compiler? Pah! Horribly broken, I say. Here is a skeleton anyway:
OBJECTS = a.obj b.obj c.obj
f.exe: $(OBJECTS)
	link /out:$@ $**
$(OBJECTS): $$(@R).c
# "The only syntactical difference from the standard inference rule
# is that the batch-mode inference rule is terminated with a double colon (::)."
.c.obj::
	cl -c /MP4 $<
EDIT
If each .obj has its own dependencies (likely!), then you simply add these as separate dependency lines (i.e., they don't have any shell commands attached).
a.obj: b.h c.h ../include/e.hpp
b.obj: b.h ../include/e.hpp
...
Often such boilerplate is generated by another tool and !INCLUDEd into the main makefile. If you are clever, you can generate these dependencies for free as you compile. (If you go this far, nmake starts to creak at the seams, and you should maybe change to GNU make.)
One further consideration to keep in mind: you basically have to define one batch rule for each path and extension. If you have two files with the same name in two different source directories, each covered by its own batch inference rule, the rule chosen may not be the one you want.
Basically the make system knows it needs to make a certain obj file, and as soon as it finds an inference rule that lets it do that, it will use it.
The workaround is to not have duplicate file names, and if that can't be avoided, to not use inference or batch rules for those files.
Ok, I spent some time this morning working on this, and thanks to bobbogo, I got it to work. Here are the exact details for anyone else who is considering this:
The old-style makefile, which compiles one file at a time, has tons of this:
$(LINKPATH)\PS_zlib.obj : zlib\PS_zlib.cpp $(PS_ZLIB_DEP)
	$(CC) zlib\PS_zlib.cpp
$(LINKPATH)\ioapi.obj : zlib\minizip\ioapi.c $(IOAPI_DEP)
	$(CC) zlib\minizip\ioapi.c
$(LINKPATH)\iowin32.obj : zlib\minizip\iowin32.c $(IOWIN32_DEP)
	$(CC) zlib\minizip\iowin32.c
Note that each file is compiled one at a time. Now you want to use the fancy Visual Studio 2010 /MP switch ("/MP[n] use up to 'n' processes for compilation") to compile multiple files at the same time. How? Your makefile needs to make use of batch-mode inference rules in nmake, as follows:
$(LINKPATH)\PS_zlib.obj : zlib\PS_zlib.cpp $(PS_ZLIB_DEP)
$(LINKPATH)\ioapi.obj : zlib\minizip\ioapi.c $(IOAPI_DEP)
$(LINKPATH)\iowin32.obj : zlib\minizip\iowin32.c $(IOWIN32_DEP)
# Batch inference rule for extension "cpp" and path "zlib":
{zlib}.cpp{$(LINKPATH)}.obj::
	$(CC) $(CCMP) $<
# Batch inference rule for extension "c" and path "zlib\minizip":
{zlib\minizip}.c{$(LINKPATH)}.obj::
	$(CC) $(CCMP) $<
In this case, elsewhere, we have
CCMP = /MP4
Note that nmake batch inference rules do not support wildcards or spaces in the paths. I found some decent nmake documentation somewhere stating that you need to create a separate rule for every extension and source-file location; you cannot have one rule if the files are in different locations. Also, files that use #import cannot be compiled with /MP.
We have a tool that generates our makefiles, so it now also generates the batch inference rules.
But it works! The time to compile one large dll went from 12 minutes down to 7 minutes! Woohoo!
