I couldn't find any documentation on -Wno-four-char-constants, but I suspect it is similar to -Wno-multichar. Am I correct?
They're related but not the same thing.
Compiling with -Wall -pedantic, the initialization:
int i = 'abc';
produces:
warning: multi-character character constant [-Wmultichar]
with both GCC and CLANG, while:
int i = 'abcd';
produces:
GCC warning: multi-character character constant [-Wmultichar]
CLANG warning: multi-character character constant [-Wfour-char-constants]
The standard (C99 standard with corrigenda TC1, TC2 and TC3 included, subsection 6.4.4.4 - character constants) states that:
The value of an integer character constant containing more than one character (e.g., 'ab'), [...] is implementation-defined.
A multi-character constant always has type int but, since the order in which the characters are packed into that int is not specified, portable use of multi-character constants is difficult (the exact value is implementation-defined).
Also compilers differ in how they handle incomplete multi-chars (such as 'abc').
Some compilers pad on the left, some on the right, regardless of endianness (and some may not pad at all).
Someone who can accept the portability problems of a complete multi-char may still want a warning for an incomplete one (-Wmultichar -Wno-four-char-constants).
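As an illustration (a sketch only: the printed values are implementation-defined; the hexadecimal results in the comments are what GCC's documented packing produces, and other compilers may differ), compiling the following with -Wall -pedantic also reproduces the warnings quoted above:

#include <stdio.h>

int main(void)
{
    int full = 'abcd';  /* complete: four characters fill a 32-bit int */
    int part = 'abc';   /* incomplete: how the missing byte is padded is up to the compiler */

    printf("%#x\n", full);  /* e.g. 0x61626364 with GCC */
    printf("%#x\n", part);  /* e.g. 0x616263 with GCC */
    return 0;
}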
Sample code:
int *s;

int foo(void)
{
    return 4;
}

int bar(void)
{
    return __atomic_always_lock_free(foo(), s);
}
Invocations:
$ gcc t0.c -O3 -c
<nothing>
$ gcc t0.c -O0 -c
t0.c:10:10: error: non-constant argument 1 to '__atomic_always_lock_free'
Any ideas?
Relevant: https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html.
This doesn't seem surprising. The documentation you linked says that size "must resolve to a compile-time constant", so it's to be expected that passing foo() might produce an error. However, GCC typically treats an expression as a compile-time constant whenever it can determine its value at compile time, even if the expression doesn't meet the language's basic definition of a constant expression. This may be considered an extension and is explicitly allowed by the C17 standard at 6.6p10.
The optimization level affects how hard the compiler tries to evaluate an expression at compile time. With optimizations off, it does little more than the basic constant folding the standard requires (e.g. 2*4). With optimizations on, you get the benefit of its full constant-propagation pass, as well as function inlining.
So in essence, under -O0 the compiler doesn't notice that foo() always returns the same value, because you've disabled the optimizations that would let it reach that conclusion. With -O3 it does notice, and so it accepts the argument as a constant.
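For what it's worth, here is a sketch of a variant that is accepted at any optimization level, assuming the intent is simply to ask about an int-sized object: the size argument is then a genuine integer constant expression and no constant propagation is required.

int *s;

int bar(void)
{
    /* sizeof *s is an integer constant expression, so this compiles even at -O0 */
    return __atomic_always_lock_free(sizeof *s, s);
}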
The following works for g++
assert(nullptr == 0);
I need to know whether any implicit type conversion is happening.
From what I know, nullptr can only be compared with pointers, not with integers, and it is also supposed to be more type-safe. Then why does the comparison with an integer work?
Then why does the comparison with an integer work?
Because, in most implementations, nullptr is represented as the machine address 0. In other words, (intptr_t)nullptr is 0. This is the case on Linux/x86-64, for example. Check by inspecting the generated assembler code obtained with g++ -S -O2 -fverbose-asm.
I even believe that this is guaranteed by the C++ standard (read e.g. n3337).
However, if you compile your code with a recent GCC as g++ -Wall -Wextra, you could get a warning.
Read also assert(3). In some cases (when NDEBUG is defined) it expands to a no-op at compile time.
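Note also that, independently of how null pointers are represented, the literal 0 is a null pointer constant in C++, which is why the comparison is well-formed at all; comparing nullptr with a nonzero integer is rejected. A small sketch:

#include <cassert>

int main()
{
    assert(nullptr == 0);     // OK: 0 is a null pointer constant
    // assert(nullptr == 1);  // ill-formed: 1 is not a null pointer constant
    int *p = nullptr;
    assert(p == 0);           // OK for the same reason
}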
Doing some encoding tests, I saved a C file with the encoding 'UTF-16 LE' (using Sublime Text).
The C file contains the following:
#include <stdio.h>

int main(void) {
    char* letter = "é";
    printf("%s\n", letter);
}
Compiling this file with gcc returns the error:
test.c:1:3: error: invalid preprocessing directive #i; did you mean #if?
1 | # i n c l u d e < s t d i o . h >
It's as if gcc inserted a space before each character when reading the C file.
My question is: can we feed gcc C files encoded in some format other than UTF-8? Why was it not possible for gcc to detect the encoding of my file and read it properly?
Because of a design choice.
From the GNU CPP manual, Character sets:
At present, GNU CPP does not implement conversion from arbitrary file encodings to the source character set. Use of any encoding other than plain ASCII or UTF-8, except in comments, will cause errors. Use of encodings that are not strict supersets of ASCII, such as Shift JIS, may cause errors even if non-ASCII characters appear only in comments. We plan to fix this in the near future.
GCC was born to build GNU, so it comes from the Unix world, where UTF-16 is not an accepted encoding for ordinary text files (and the GNU toolchain passes source files between different programs, e.g. cpp the preprocessor, the compiler proper, etc.).
Besides, who uses UTF-16 for source code? Especially for C, which hates all the \0 bytes such files contain. In any case, the encoding of the source code has nothing to do with the program's run-time behaviour (that is governed by the default locale used when reading files, printing strings, etc.).
If it causes problems, just use a pre-preprocessing step (which is not so uncommon) to convert your source into code gcc can use (it can stay hidden from you, so you can keep editing in UTF-16).
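For instance, something along these lines (a sketch; the file names are placeholders) re-encodes the source before compiling:
$ iconv -f UTF-16LE -t UTF-8 test.c > test-utf8.c
$ gcc test-utf8.c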
I have a Fortran program that gives different results with -O0 and -O1 on 32-bit systems. Tracking down the difference, I came up with the following test case (test.f90):
program test
  implicit none
  character foo
  real*8 :: Fact, Final, Zeta, rKappa, Rnxyz, Zeta2
  read(5,*) rKappa
  read(5,*) Zeta
  backspace(5)
  read(5,*) Zeta2
  read(5,*) Rnxyz
  Fact = rKappa/Sqrt(Zeta**3)
  write(6,'(ES50.40)') Fact*Rnxyz
  Fact = rKappa/Sqrt(Zeta2**3)
  Final = Fact*Rnxyz
  write(6,'(ES50.40)') Final
end program test
with this data file:
4.1838698196228139E-013
20.148674000000000
-0.15444754236171612
The program should write exactly the same number twice. Note that Zeta2 is the same as Zeta, since the same number is read again (this is to prevent the compiler from realizing they are the same number and hiding the problem). The only difference is that in the first case the operation is done "on the fly" inside the write statement, while in the second the result is first saved in a variable and then the variable is printed.
Now I compile with gfortran 4.8.4 (Ubuntu 14.04 version) and run it:
$ gfortran -O0 -m32 test.f90 && ./a.out < data
-7.1447898573566615177997578153994664188136E-16
-7.1447898573566615177997578153994664188136E-16
$ gfortran -O1 -m32 test.f90 && ./a.out < data
-7.1447898573566615177997578153994664188136E-16
-7.1447898573566605317236262891347096541529E-16
So, with -O0 the numbers are identical, with -O1 they are not.
I tried checking the optimized code with -fdump-tree-optimized:
final.10_53 = fact_44 * rnxyz.9_52;
D.1835 = final.10_53;
_gfortran_transfer_real_write (&dt_parm.5, &D.1835, 8);
[...]
final.10_63 = rnxyz.9_52 * fact_62;
final = final.10_63;
[...]
_gfortran_transfer_real_write (&dt_parm.6, &final, 8);
The only difference I see is that in one case the number printed is fact*rnxyz, and in the other it is rnxyz*fact. Can this change the result? From High Performance Mark's answer, I guess that might have to do with which variable goes to which register when. I also tried looking at the assembly output generated with -S, but I can't say I understand it.
And then, without the -m32 flag (on a 64bit machine), the numbers are also identical...
Edit: The numbers are identical if I add -ffloat-store or -mfpmath=sse -msse2 (see here, at the end). This makes sense, I guess, when I compile on an i686 machine, as the compiler would use 387 math by default. But when I compile on an x86-64 machine with -m32, it shouldn't be needed, according to the documentation:
-mfpmath=sse [...]
For the i386 compiler, you must use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For the x86-64 compiler, these extensions are enabled by default.
[...]
This is the default choice for the x86-64 compiler.
Maybe -m32 makes these "defaults" ineffective? However, running gfortran -Q --help=target says mfpmath is 387 and msse2 is disabled...
Too long for a comment, but more of a suspicion than an answer. The OP writes:
The only difference is that first an operation is done "on the fly"
when writing, and then the result is saved in a variable and the
variable is printed.
which has me thinking about the x87 FPU's internal 80-bit floating-point arithmetic (the default for 32-bit x86 code). The precise results of a sequence of floating-point operations are affected by when intermediate values are trimmed from 80 to 64 bits, and that is exactly the kind of thing that may differ from one compiler optimisation level to another.
Note too that the difference between the two numbers printed by the -O1 version of the code kicks in around the 16th significant digit, right at the limit of the precision available in 64-bit floating-point arithmetic.
Some more fiddling around gives
1 01111001100 1001101111011110011111001110101101101100011000001110
as the IEEE-754 representation of
-7.1447898573566615177997578153994664188136E-16
and
1 01111001100 1001101111011110011111001110101101101100011000001101
as the IEEE-754 representation of
-7.1447898573566605317236262891347096541529E-16
The two numbers differ by 1 in their significands. It's possible that at O0 your compiler adheres to IEEE-754 rules for f-p arithmetic (those rules are strict about matters such as rounding at the low-order bits) but at O1 adheres only to Fortran's rather more relaxed view of arithmetic. (The Fortran standard does not require the use of IEEE-754 arithmetic.)
You may find a compiler option to enforce adherence to IEEE-754 rules at higher levels of optimisation. You may also find that that adherence costs you a measurable amount of run time.
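In gfortran's case, the flags the question's edit already identifies do exactly that (repeating the question's own invocation style): -ffloat-store spills floating-point variables to memory instead of keeping them in the wider x87 registers, and -mfpmath=sse -msse2 does the arithmetic in 64-bit SSE registers, avoiding the 80-bit x87 unit altogether.
$ gfortran -O1 -m32 -ffloat-store test.f90 && ./a.out < data
$ gfortran -O1 -m32 -mfpmath=sse -msse2 test.f90 && ./a.out < data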
The C99 standard says:
A double argument representing an infinity is converted in one of the
styles [-]inf or [-]infinity -- which style is implemented is
implementation-defined. (p.278 section 7.19.6.1)
Unfortunately on Windows:
printf("%f\n", 1.0f/0.0f)
produces: 1.#INF00
This is a problem because some applications expect C-standard-compliant strings as input (for example, C#'s Double.Parse works for "Infinity" but not for "1.#INF00"; curiously, "infinity" is not accepted either, at least when I tried it with Mono).
My question is: how do I force printf under Windows to output "inf" or "infinity" instead of 1.#INF00?
(I am compiling with MinGW gcc 4.8.2)
You can choose between the MSVC (default) and MinGW versions of the printf family of functions.
Just define __USE_MINGW_ANSI_STDIO like this, and the output should be C99-compliant:
#define __USE_MINGW_ANSI_STDIO 1
Some documentation here and here.
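A minimal sketch (the macro must be defined before the first standard header is included, or passed on the command line as -D__USE_MINGW_ANSI_STDIO=1):

/* Select MinGW's own C99-compliant printf instead of MSVCRT's. */
#define __USE_MINGW_ANSI_STDIO 1
#include <stdio.h>

int main(void)
{
    double zero = 0.0;
    printf("%f\n", 1.0 / zero);  /* expected to print "inf" rather than "1.#INF00" */
    return 0;
}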