The following works for g++
assert(nullptr == 0);
I need to know if there is any implicit type conversion that is happening.
From what I know, nullptr can only be compared with pointers, not with integers, and it is also more type-safe. Then why does the comparison with an integer work?
Then why does the comparison with an integer work?
Because, in most implementations, nullptr is the 0 machine address; in other words, (intptr_t)nullptr is 0. This is the case on Linux/x86-64, for example. Check by inspecting the generated assembler code obtained with g++ -S -O2 -fverbose-asm.
I even believe that this is guaranteed by the C++ standard (read e.g. n3337).
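At the language level, the comparison compiles because the literal 0 is a null pointer constant; comparing nullptr with a nonzero integer is rejected. A quick sketch to check this with g++ (the file name is only for illustration):

// nullptr_cmp.cpp -- illustrative sketch; build with: g++ -std=c++11 -c nullptr_cmp.cpp
#include <cassert>

void check(int *p)
{
    assert(nullptr == 0);      // OK: the literal 0 is a null pointer constant
    // assert(nullptr == 1);   // rejected: no conversion between nullptr_t and int
    assert(p == nullptr);      // OK: a pointer can always be compared with nullptr
}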
However, if you compile your code with a recent GCC, e.g. gcc -Wall -Wextra, you could get a warning.
Read also assert(3). In some cases (when NDEBUG is defined) it expands to a no-op at compile time.
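For instance (a minimal sketch; the file name is hypothetical, the NDEBUG behaviour is standard <cassert> semantics):

// ndebug_demo.cpp -- hypothetical file name
//   g++ -c ndebug_demo.cpp            : the assert is compiled in
//   g++ -DNDEBUG -c ndebug_demo.cpp   : assert(...) expands to a no-op
#include <cassert>

int main()
{
    assert(nullptr == 0);   // disappears entirely when NDEBUG is defined
    return 0;
}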
Sample code:
int *s;

int foo(void)
{
    return 4;
}

int bar(void)
{
    return __atomic_always_lock_free(foo(), s);
}
Invocations:
$ gcc t0.c -O3 -c
<nothing>
$ gcc t0.c -O0 -c
t0.c:10:10: error: non-constant argument 1 to '__atomic_always_lock_free'
Any ideas?
Relevant: https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html.
This doesn't seem surprising. The documentation you linked says that "size must resolve to a compile-time constant" and so it's to be expected that you might get an error when passing foo(). However, it's typical that if GCC is able to determine the value of an expression at compile time, then it will treat it as a compile-time constant, even if it doesn't meet the language's basic definition of a constant expression. This may be considered an extension and is explicitly allowed by the C17 standard at 6.6p10.
The optimization level is relevant to what the compiler tries in attempting to evaluate an expression at compile time. With optimizations off, it does little more than the basic constant folding that the standard requires (e.g. 2*4). With optimizations on, you get the benefit of its full constant propagation pass, as well as function inlining.
So in essence, under -O0, the compiler doesn't notice that foo() always returns the same value, because you've disabled the optimizations that would allow it to reach that conclusion. With -O3 it does and so it accepts it as a constant.
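If you need the call to be accepted at any optimization level, pass a size that is a constant expression in the language sense, for example sizeof(int). A minimal sketch (the function name baz is just for illustration):

int *s;

int baz(void)
{
    /* sizeof(int) is a compile-time constant, so this is accepted
       even at -O0, unlike the foo() call above. */
    return __atomic_always_lock_free(sizeof(int), s);
}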
While developing a bare-metal firmware in C for an RV32IM target (RISC-V), I encountered a linking error when LTO is enabled:
/home/duranda/riscv/lib/gcc/riscv64-unknown-elf/10.2.0/../../../../riscv64-unknown-elf/bin/ld: /tmp/firmware.elf.5cZNyC.ltrans0.ltrans.o: in function `.L0 ':
/home/duranda/whatever/firmware.c:493: undefined reference to `memset'
There is, however, no call to memset in my firmware. The memset is inserted by GCC during optimization, as described here. The build is optimized for size using the GCC -Os and -flto -fuse-linker-plugin flags. In addition, the -fno-builtin-memset -nostdinc -fno-tree-loop-distribute-patterns -nostdlib -ffreestanding flags are used to prevent the use of memset during optimization and to avoid including the standard libraries.
How can I prevent memset insertion during LTO? Note that the firmware should not be linked against libc. I also tried providing a custom implementation of memset, but the linker does not use it for the memset inserted during optimization (it still reports an undefined reference).
I hit a similar issue several years ago and tried to fix it, but it turned out I had misunderstood the meaning of -fno-builtin [1]: -fno-builtin does not guarantee that GCC won't call memcpy, memmove or memset implicitly.
I guess the simplest solution is: do NOT compile your libc.c with -flto, or in other words, compile libc.c with -fno-lto.
Here is my guess about what happens. I don't know how to reproduce what you see, so it might be incorrect:
During the first phase of LTO, GCC collects every symbol used in the program,
then asks the linker to provide the files defining them, and discards any unused symbols.
Those files are then read back into GCC and optimized again; at this point GCC may use a built-in function for optimization or code generation that it did not pull in before.
The reference to memset is created at the LTO stage, which is too late to pull in any new symbol in the current GCC LTO flow, and in this case memset was discarded at an earlier stage...
So you might ask why compiling libc.c with -fno-lto works: because that file is not involved in the LTO flow, its symbols are not discarded by it.
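As an illustration of that suggestion, a minimal freestanding memset kept out of the LTO flow might look like this (file name and exact flags are an example, not a verified recipe):

/* memset.c -- hypothetical minimal freestanding memset for the firmware.
   Compile this one file without LTO so the symbol survives to the final link:
     $ riscv64-unknown-elf-gcc -Os -ffreestanding -fno-builtin -fno-lto -c memset.c
   and link memset.o together with the LTO-compiled objects. */
#include <stddef.h>

void *memset(void *dest, int value, size_t count)
{
    unsigned char *p = (unsigned char *)dest;
    while (count--)
        *p++ = (unsigned char)value;
    return dest;
}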
Here is a sample program to show that GCC will call memset even if you compile with -fno-builtin; aarch64 GCC and RISC-V GCC will generate a function call to memset.
// $ riscv64-unknown-elf-gcc x.c -o - -O3 -S -fno-builtin
struct bar {
    int a[100];
};

struct bar y;

void foo(){
    struct bar x = {{0}};
    y = x;
}
Here is the corresponding gcc source code[2] for this case.
[1] https://gcc.gnu.org/pipermail/gcc-patches/2014-August/397382.html
[2] https://github.com/riscv/riscv-gcc/blob/riscv-gcc-10.2.0/gcc/expr.c#L3143
I'm not sure -fno-builtin-* does what you think it does. If you use those flags, then GCC will try to call an external function. If you don't use those flags, GCC will just insert inline code instead of relying on the library.
So it would appear to me you should simply not use any -fno-builtin flags.
Is it safe to assume that running g++ with
g++ -std=c++98 -std=c++11 ...
will compile using C++11? I haven't found an explicit confirmation in the documentation, but I see the -O flags behave this way.
The GCC manual doesn't state that the
last of any mutually exclusive -std=... options specified takes effect. The first occurrence
or the last occurrence are the only alternatives. There are numerous
GCC flags that take mutually exclusive alternative values from a finite set - mutually
exclusive, at least modulo the language of a translation unit. Let's call them mutex options for short.
Whether it is documented that the last setting takes effect seems almost random. It is
documented for the -O options, as you've noted, and in general terms for mutually exclusive warning options, perhaps
others. It is never documented that the first of multiple settings takes effect, because
it's never true.
The documentation leans - with imperfect consistency - on the historical conventions
of command usage in unix-like OSes. If a command accepts a mutex option
then the last occurrence of the option takes effect. If the command were - unusually -
to act only on the first occurrence of the option then it would be a bug for
the command to accept subsequent occurrences at all: it should give a usage error.
This is custom and practice. The custom facilitates scripting with tools that
respect it, e.g. a script can invoke a tool passing a default setting of some
mutex option but enable the user to override that setting via a parameter of the script,
whose value can simply be appended to the default invocation.
In the absence of official GCC documentation to the effect you want, you might get
reassurance by attempting to find any GCC mutex option for which it is not
the case that the last occurrence takes effect. Here's one stab:
I'll compile and link this program:
main.cpp
#include <cstdio>

#if __cplusplus >= 201103L
static const char * str = "C++11";
#else
static const char * str = "Not C++11";
#endif

int main()
{
    printf("%s\n%d\n",str,str); // Format `%d` for `str` mismatch
    return 0;
}
with the commandline:
g++ -std=c++98 -std=c++11 -m32 -m64 -O0 -O1 -g3 -g0 \
-Wformat -Wno-format -o wrong -o right main.cpp
which requests contradictory option pairs:
-std=c++98 -std=c++11: Conform to C++98. Conform to C++11.
-m32 -m64: Produce 32-bit code. Produce 64-bit code.
-O0 -O1: Do not optimise at all. Optimise to level 1.
-g3 -g0: Emit maximum debugging info. Emit no debugging info.
-Wformat -Wno-format: Sanity-check printf arguments. Don't sanity-check them.
-o wrong -o right: Output program wrong. Output program right.
It builds successfully with no diagnostics:
$ echo "[$(g++ -std=c++98 -std=c++11 -m32 -m64 -O0 -O1 -g3 -g0 \
-Wformat -Wno-format -o wrong -o right main.cpp 2>&1)]"
[]
It outputs no program wrong:
$ ./wrong
bash: ./wrong: No such file or directory
It does output a program right:
$ ./right
C++11
-1713064076
which tells us it was compiled to C++11, not C++98.
The bug exposed by the garbage -1713064076 was not diagnosed because
-Wno-format, not -Wformat, took effect.
It is a 64-bit, not 32-bit executable:
$ file right
right: ELF 64-bit LSB shared object, x86-64 ...
It was optimized at -O1, not -O0, because:
$ "[$(nm -C right | grep str)]"
[]
shows that the local symbol str is not in the symbol table.
And it contains no debugging information:
echo "[$(readelf --debug-dump right)]"
[]
as per -g0, not -g3.
Since GCC is open-source software, another way of resolving doubts
about its behaviour that is available to C programmers, at least,
is to inspect the relevant source code, available via git source-control at
https://github.com/gcc-mirror/gcc.
The relevant source code for your question is in file gcc/gcc/c-family/c-opts.c,
function:
/* Handle switch SCODE with argument ARG. VALUE is true, unless no-
form of an -f or -W option was given. Returns false if the switch was
invalid, true if valid. Use HANDLERS in recursive handle_option calls. */
bool
c_common_handle_option (size_t scode, const char *arg, int value,
int kind, location_t loc,
const struct cl_option_handlers *handlers);
It is essentially a simple switch ladder over option settings enumerated by scode - which
is OPT_std_c__11 for option -std=c++11 - and leaves no doubt that it
puts an -std option setting into effect regardless of what setting was in effect previously. You can look at branches other than master
(gcc-{5|6|7}-branch) and reach the same conclusion.
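Schematically, the handler behaves like the sketch below (an illustration only, not GCC's actual source): each case simply overwrites the previously stored setting, so whichever -std= option is parsed last wins.

/* Illustrative sketch only -- not the real c_common_handle_option(). */
enum lang_std { STD_CXX98, STD_CXX11 };
static enum lang_std active_std;

static int handle_std_option(int scode)
{
    switch (scode)
    {
    case 1:                        /* stands in for OPT_std_c__98 */
        active_std = STD_CXX98;    /* overwrites any earlier setting */
        break;
    case 2:                        /* stands in for OPT_std_c__11 */
        active_std = STD_CXX11;    /* overwrites any earlier setting */
        break;
    default:
        return 0;                  /* option not handled here */
    }
    return 1;
}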
It's not uncommon to find GCC build system scripts that rely on the validity of
overriding an option setting by appending a new setting. Legalistically, this
is usually counting on undocumented behaviour, but there's a better
chance of Russia joining NATO than of GCC ceasing to take the last setting that
it parses for a mutex option.
I have a Fortran program that gives different results with -O0 and -O1 on 32-bit systems. Tracking down the difference, I came up with the following test case (test.f90):
program test
  implicit none
  character foo
  real*8 :: Fact,Final,Zeta,rKappa,Rnxyz,Zeta2
  read(5,*) rKappa
  read(5,*) Zeta
  backspace(5)
  read(5,*) Zeta2
  read(5,*) Rnxyz
  Fact=rKappa/Sqrt(Zeta**3)
  write(6,'(ES50.40)') Fact*Rnxyz
  Fact=rKappa/Sqrt(Zeta2**3)
  Final = Fact*Rnxyz
  write(6,'(ES50.40)') Final
end program test
with this data file:
4.1838698196228139E-013
20.148674000000000
-0.15444754236171612
The program should write exactly the same number twice. Note that Zeta2 is the same as Zeta, since the same number is read again (this is to prevent the compiler from realizing they are the same number and hiding the problem). The only difference is that in the first case the operation is done "on the fly" in the write statement, while in the second case the result is saved in a variable and then the variable is printed.
Now I compile with gfortran 4.8.4 (Ubuntu 14.04 version) and run it:
$ gfortran -O0 -m32 test.f90 && ./a.out < data
-7.1447898573566615177997578153994664188136E-16
-7.1447898573566615177997578153994664188136E-16
$ gfortran -O1 -m32 test.f90 && ./a.out < data
-7.1447898573566615177997578153994664188136E-16
-7.1447898573566605317236262891347096541529E-16
So, with -O0 the numbers are identical, with -O1 they are not.
I tried checking the optimized code with -fdump-tree-optimized:
final.10_53 = fact_44 * rnxyz.9_52;
D.1835 = final.10_53;
_gfortran_transfer_real_write (&dt_parm.5, &D.1835, 8);
[...]
final.10_63 = rnxyz.9_52 * fact_62;
final = final.10_63;
[...]
_gfortran_transfer_real_write (&dt_parm.6, &final, 8);
The only difference I see is that in one case the number printed is fact*rnxyz, and in the other it is rnxyz*fact. Can this change the result? From High Performance Mark's answer, I guess that might have to do with which variable goes to which register when. I also tried looking at the assembly output generated with -S, but I can't say I understand it.
And then, without the -m32 flag (on a 64bit machine), the numbers are also identical...
Edit: The numbers are identical if I add -ffloat-store or -mfpmath=sse -msse2 (see here, at the end). This makes sense, I guess, when I compile on an i686 machine, as the compiler would by default use 387 math. But when I compile on an x86-64 machine with -m32, it shouldn't be needed according to the documentation:
-mfpmath=sse [...]
For the i386 compiler, you must use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default.
[...]
This is the default choice for the x86-64 compiler.
Maybe -m32 makes these "defaults" ineffective? However, running gfortran -Q --help=target says mfpmath is 387 and msse2 is disabled...
Too long for a comment, but more of a suspicion than an answer. OP writes
The only difference is that first an operation is done "on the fly"
when writing, and then the result is saved in a variable and the
variable is printed.
which has me thinking about the internal 80-bit floating-point arithmetic of the x87 FPU (used by default for 32-bit x86 code). The precise results of a sequence of f-p arithmetic operations will be affected by when intermediate values are trimmed from 80 to 64 bits. And that's the kind of thing which may differ from one compiler optimisation level to another.
Note too that the difference between the two numbers printed by the -O1 version of the code kicks in at the 15th decimal digit, around the limit of precision available in 64-bit f-p arithmetic.
Some more fiddling around gives
1 01111001100 1001101111011110011111001110101101101100011000001110
as the IEEE-754 representation of
-7.1447898573566615177997578153994664188136E-16
and
1 01111001100 1001101111011110011111001110101101101100011000001101
as the IEEE-754 representation of
-7.1447898573566605317236262891347096541529E-16
The two numbers differ by 1 in their significands. It's possible that at -O0 your compiler adheres to IEEE-754 rules for f-p arithmetic (those rules are strict about matters such as rounding at the low-order bits) but at -O1 adheres only to Fortran's rather more relaxed view of arithmetic. (The Fortran standard does not require the use of IEEE-754 arithmetic.)
You may find a compiler option to enforce adherence to IEEE-754 rules at higher levels of optimisation. You may also find that that adherence costs you a measurable amount of run time.
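For reference, the bit patterns above can be reproduced with a small helper of the following shape (a sketch assuming double is IEEE-754 binary64; dump_bits is just an illustrative name, not necessarily how the patterns were obtained here):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print a double as IEEE-754 binary64 fields: sign, 11 exponent bits,
   52 significand bits. */
static void dump_bits(double d)
{
    uint64_t u;
    int i;
    memcpy(&u, &d, sizeof u);               /* inspect the raw bits */
    printf("%d ", (int)(u >> 63));          /* sign */
    for (i = 62; i >= 52; --i)              /* exponent */
        printf("%d", (int)((u >> i) & 1));
    printf(" ");
    for (i = 51; i >= 0; --i)               /* significand */
        printf("%d", (int)((u >> i) & 1));
    printf("\n");
}

int main(void)
{
    dump_bits(-7.1447898573566615177997578153994664188136E-16);
    dump_bits(-7.1447898573566605317236262891347096541529E-16);
    return 0;
}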
What is the right flag, or order of flags, to disable treating a particular warning as an error in GCC? I want to do that for -Wimplicit-interface.
>cat test.f90
call s
end
> gfortran -c -Werror -Wimplicit-interface -Wno-error=implicit-interface test.f90 -o test.o
test.f90:1.7:
call s
1
Warning: Procedure 's' called with an implicit interface at (1)
>ls test*
test.f90
No test.o was generated.
Without -Werror it works
> gfortran -c -Wimplicit-interface -Wno-error=implicit-interface test.f90 -o test.o
test.f90:1.7:
call s
1
Warning: Procedure 's' called with an implicit interface at (1)
> ls test*
test.f90 test.o
GCC version is gcc version 4.9.2 20141030 (Cray Inc.) (GCC).
This is not an explicit answer to the question, but I found it educative enough and too long to be put as a comment.
As you just found, you might not be able to achieve what you want if you combine -Werror and -Wno-error=implicit-interface. Let me explain: contrary to what the documentation says, especially the following sentence,
The combined effect of positive and negative forms is that more specific options have priority over less specific ones, independently of their position in the command-line.
It seems that this is not the case in the actual implementation. I had a similar problem recently, and by googling I found this, which contains the following sentence:
'-w' permanently sets all warnings off no matter what specific warning is set on
It actually suggests that, when you use a non-specific option, the implementation does not allow you to change a specific option that is included in the non-specific one.
As #innoSPG points out, the actual behaviour does not conform to what is claimed in the manual.
The comment by #MarkGlisse revealed that this has changed with GCC 5, so it was probably a bug.
The solution is therefore to use a recent version, or not to combine -Werror with -Wimplicit-interface.
Or to really provide explicit interfaces everywhere, though that can be problematic, as MPI libraries differ in how many explicit interfaces their mpi modules provide.