When compiling this:
constexpr double x {123.0};
constexpr double y = x / 0.0;
std::cout << x << " / 0 = " << y << "\n";
The compiler (gcc 4.9.2, -std=c++11 or c++14) fails, giving error:
'(1.23e+2 / 0.0)' is not a constant expression
constexpr double y = x / 0.0;
How is the result (Inf) relevant when deciding if y can be a constexpr or not?
For reference, this seems to be the way to do it:
static constexpr double z = std::numeric_limits<double>::quiet_NaN();
static constexpr double w = std::numeric_limits<double>::infinity();
Infinity is an implementation-defined result: the standard does not require IEEE floating point, division by zero is formally undefined behavior, and constant expressions explicitly exclude undefined behavior.
From the draft C++ standard section 5.6 [expr.mul]:
The binary / operator yields the quotient, and the binary % operator
yields the remainder from the division of the first expression by the
second. If the second operand of / or % is zero the behavior is
undefined.
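To see the exclusion in action: the same division is accepted when it need not be a constant expression. A minimal sketch (the run-time division is still formally undefined behavior per the standard, but a typical IEEE-754 implementation quietly yields infinity):
#include <iostream>

int main() {
    constexpr double x {123.0};
    double y = x / 0.0;  // not a constant expression, so the compiler accepts it
    std::cout << x << " / 0 = " << y << "\n";  // typically prints "123 / 0 = inf"
}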
Related
I'm wondering why the integer ii is initialized at compile time, but not the float ff here:
int main() {
    const int i = 1;
    constexpr int ii = i;
    const float f = 1.0;
    constexpr float ff = f;
}
This is what happens when I try to compile:
> g++ -std=c++11 test.cc
test.cc: In function ‘int main()’:
test.cc:6:24: error: the value of ‘f’ is not usable in a constant expression
constexpr float ff = f;
^
test.cc:5:15: note: ‘f’ was not declared ‘constexpr’
const float f = 1.0;
Constant variables of integral types with constant initializers are integral constant expressions (de facto implicitly constexpr; see expr.const in ISO C++). float is not an integral type and does not meet the requirements for a constant expression without the use of constexpr. (A similar case is why an int can, but a float cannot, be a template parameter.)
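For instance (floating-point non-type template parameters were only admitted in C++20; the struct names here are illustrative):
template<int N>   struct IntParam {};      // OK: int is an integral type
// template<float F> struct FloatParam {}; // error in C++11/14: float is not allowed here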
In C++, constant integers are treated differently than constants of other types. If they are initialized with a compile-time constant expression, they can be used in a compile-time expression. This was done so that an array size could be a const int instead of a #define (as you were forced to do in C):
(Assume no VLA extensions)
const int s = 10;
int a[s]; // OK in C++
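The float case from the question works as soon as f itself is declared constexpr; a minimal sketch:
int main() {
    constexpr float f = 1.0f;  // now explicitly a constant expression
    constexpr float ff = f;    // OK
    (void)ff;                  // silence the unused-variable warning
}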
I have defined a constexpr function as following:
constexpr int foo(int i)
{
    return i * 2;
}
And this is what is in the main function:
int main()
{
    int i = 2;
    cout << foo(i) << endl;
    int arr[foo(i)];
    for (int j = 0; j < foo(i); j++)
        arr[j] = j;
    for (int j = 0; j < foo(i); j++)
        cout << arr[j] << " ";
    cout << endl;
    return 0;
}
The program was compiled under OS X 10.8 with command clang++. I was surprised that the compiler did not produce any error message about foo(i) not being a constant expression, and the compiled program actually worked fine. Why?
The definition of constexpr functions in C++ is such that the function is guaranteed to be able to produce a constant expression when called with arguments that are themselves constant expressions. Whether the evaluation happens at compile time or at run time when the result isn't used in a constant expression is not specified, though (see also this answer). When passing non-constant expressions to a constexpr function you may not get a constant expression.
Your code above should, however, not compile, because i is not a constant expression: it is clearly used by foo() to produce the result, which is then used as an array dimension. It seems clang implements C-style variable-length arrays, as it produces the following warning for me:
warning: variable length arrays are a C99 feature [-Wvla-extension]
A better test to see whether something is, indeed, a constant expression is to use it to initialize a constexpr variable, e.g.:
constexpr int j = foo(i);
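A sketch of that test, assuming the foo() from the question — the constant argument compiles, the non-constant one is rejected:
constexpr int foo(int i)
{
    return i * 2;
}

int main() {
    constexpr int j = foo(2);    // OK: 2 is a constant expression
    int i = 2;
    // constexpr int k = foo(i); // error: i is not a constant expression
    (void)j;
}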
I used the code at the top (with "using namespace std;" added in) and had no errors when compiling with "g++ -std=c++11 code.cc" (see below for a reference that qualifies this code). Here is the code and output:
#include <iostream>
using namespace std;

constexpr int foo(int i)
{
    return i * 2;
}

int main()
{
    int i = 2;
    cout << foo(i) << endl;
    int arr[foo(i)];
    for (int j = 0; j < foo(i); j++)
        arr[j] = j;
    for (int j = 0; j < foo(i); j++)
        cout << arr[j] << " ";
    cout << endl;
    return 0;
}
output:
4
0 1 2 3
Now consider the reference https://msdn.microsoft.com/en-us/library/dn956974.aspx. It states: "...A constexpr function is one whose return value can be computed at compile time when consuming code requires it. A constexpr function must accept and return only literal types. When its arguments are constexpr values, and consuming code requires the return value at compile time, for example to initialize a constexpr variable or provide a non-type template argument, it produces a compile-time constant. When called with non-constexpr arguments, or when its value is not required at compile time, it produces a value at run time like a regular function. (This dual behavior saves you from having to write constexpr and non-constexpr versions of the same function.)"
It gives as a valid example:
constexpr float exp(float x, int n)
{
    return n == 0 ? 1 :
        n % 2 == 0 ? exp(x * x, n / 2) :
        exp(x * x, (n - 1) / 2) * x;
}
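Both behaviors described there can be seen with that same function; a minimal sketch of calling it:
constexpr float exp(float x, int n)  // the exp() from the example above
{
    return n == 0 ? 1 :
        n % 2 == 0 ? exp(x * x, n / 2) :
        exp(x * x, (n - 1) / 2) * x;
}

int main() {
    constexpr float a = exp(2.0f, 10); // constexpr arguments: computed at compile time
    int n = 10;                        // n is not a constant expression
    float b = exp(2.0f, n);            // computed at run time, like a regular function
    (void)a; (void)b;
}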
This is an old question, but it's the first result on a google search for the VS error message "constexpr function return is non-constant". And while it doesn't help my situation, I thought I'd put my two cents in...
While Dietmar gives a good explanation of constexpr, and although the error should be caught straight away (as it is with the -pedantic flag), this code looks like it's suffering from some compiler optimization.
The value i is set to 2, and for the duration of the program i never changes. The compiler probably noticed this and optimized the variable into a constant (replacing all references to the variable i with the constant 2 before applying that parameter to the function), thus creating a constexpr call to foo().
I bet if you looked at the disassembly you'd see that calls to foo(i) were replaced with the constant value 4 - since that is the only possible return value for a call to this function during execution of the program.
Using the -pedantic flag forces the compiler to analyze the program from the strictest point of view (probably done before any optimizations) and thus catches the error.
Given a vector of reals c and a vector of integers rw, I want to create a vector z with elements z_i=c_i^rw_i. I tried to do this using the component-wise function pow, but I get a compiler error.
#include <Eigen/Core>
typedef Eigen::VectorXd RealVector;
typedef Eigen::VectorXi IntVector; // dynamically-sized vector of integers
RealVector c; c << 2, 3, 4, 5;
IntVector rw; rw << 6, 7, 8, 9;
RealVector z = c.pow(rw); // compile error
The compiler error is
error C2664: 'const Eigen::MatrixComplexPowerReturnValue<Derived> Eigen::MatrixBase<Derived>::pow(const std::complex<double> &) const': cannot convert argument 1 from 'IntVector' to 'const double &'
with
[
Derived=Eigen::Matrix<double,-1,1,0,-1,1>
]
c:\auc\sedanal\LammSolve.h(117): note: Reason: cannot convert from 'IntVector' to 'const double'
c:\auc\sedanal\LammSolve.h(117): note: No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called
What is wrong with this code? And, assuming it can be fixed, how would I do the same operation when c is a real matrix instead of a vector, to compute c_ij^b_i for all elements of c?
Compiler is Visual Studio 2015, running under 64-bit Windows 7.
First of all, MatrixBase::pow is a function that computes the matrix power of a square matrix (if the matrix has an eigenvalue decomposition, the result has the same eigenvectors, but with the eigenvalues raised to the given power).
What you want is an element-wise power, which since there is no cwisePow function in MatrixBase, requires switching to the Array-domain. Furthermore, there is no integer-specialization for the powers (this could be efficient, but only up to a certain threshold -- and checking for that threshold for every element would waste computation time), so you need to cast the exponents to the type of your matrix.
To also answer your bonus question:
#include <iostream>
#include <Eigen/Core>

int main(int argc, char **argv) {
    Eigen::MatrixXd A; A.setRandom(3, 4);
    Eigen::VectorXi b = (Eigen::VectorXd::Random(3) * 16).cast<int>();
    Eigen::MatrixXd C = A.array()            // go to array domain
        .pow(                                // element-wise power
            b.cast<double>()                 // cast exponents to double
             .replicate(1, A.cols()).array() // repeat exponents to match the size of A
        );
    std::cout << A << '\n' << b << '\n' << C << '\n';
}
Essentially, this will call C(i,j) = std::pow(A(i,j), b(i)) for each i, j. If all your exponents are small, you might actually be faster with a simple nested loop that calls a specialized pow(double, int) implementation (like gcc's __builtin_powi), but you should benchmark that with actual data.
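A minimal sketch of that alternative, using std::pow here (the helper name cwisePowByRow is made up for illustration; substitute a specialized integer-power routine such as gcc's __builtin_powi only after benchmarking):
#include <cmath>
#include <Eigen/Core>

Eigen::MatrixXd cwisePowByRow(const Eigen::MatrixXd& A, const Eigen::VectorXi& b) {
    Eigen::MatrixXd C(A.rows(), A.cols());
    for (int i = 0; i < A.rows(); ++i)          // one exponent per row
        for (int j = 0; j < A.cols(); ++j)
            C(i, j) = std::pow(A(i, j), b(i));  // C(i,j) = A(i,j)^b(i)
    return C;
}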
While checking for overflow of the add operation on the short and char data types, the assertions inserted by Frama-C seem to be incorrect:
For char and short data, the maximum positive and negative bounds in the assertions are those of the int data type.
What could be the reason for this?
Integral types of rank less than int are converted to either int or unsigned int when used in an arithmetic operation (see C11 6.3.1.8, Usual arithmetic conversions). This is why you see the cast to (int) for x and y. Note that by default -rte will not emit warnings for downcasts, as they are not undefined behavior (6.3.1.3§3 indicates that signed downcasts are implementation defined and that an implementation may raise a signal). If you add the option -warn-signed-downcast, you'll see the assertions you were probably looking for, which are due to the cast of the result into (char):
/*@ assert rte: signed_downcast: (int)x+(int)y ≤ 127; */
/*@ assert rte: signed_downcast: -128 ≤ (int)x+(int)y; */
Note that if you store the result into an int, as in
void main(void) {
    char x;
    char y;
    int z;
    x = 1;
    y = 127;
    z = x + y;
    return;
}
There won't be any downcast warning (but the signed overflow warnings will be present).
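In that int variant, the guards RTE emits are the overflow assertions on the int-typed addition; roughly (a sketch — the exact pretty-printing varies between Frama-C versions):
/*@ assert rte: signed_overflow: -2147483648 ≤ (int)x+(int)y; */
/*@ assert rte: signed_overflow: (int)x+(int)y ≤ 2147483647; */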
The compiler I use is g++ (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4.
I compile my programs with the following command:
g++ -std=c++11 -pedantic -Wall program.cpp
The program no. 1:
#include <iostream>
using namespace std;

int main() {
    unsigned int b;
    b = -54;
    cout << b << endl;
    return 0;
}
The program prints 4294967242, and this is the value I expected, because this is the case where we assign an out-of-range value to a variable of unsigned type, so the result is the original value reduced modulo 2^32 (4294967296 - 54 = 4294967242).
The program no. 2:
#include <iostream>
using namespace std;

int main() {
    unsigned int b;
    b = 54.1234;
    cout << b << endl;
    return 0;
}
The program prints 54, and this is also OK, because the stored value is the part before the decimal point, and the fractional part is truncated.
The program no. 3:
#include <iostream>
using namespace std;

int main() {
    unsigned int b;
    b = -54.1234;
    cout << b << endl;
    return 0;
}
Here during compilation I get the warning "overflow in implicit constant conversion".
And the program prints 0. Why is that? I thought that it would truncate the fractional part (as in program 2) and then store the result of the modulo reduction (as in program 1).
But if I write program no. 4:
#include <iostream>
using namespace std;

int main() {
    unsigned int b;
    float k = -54.1234;
    b = k;
    cout << b << endl;
    return 0;
}
then I get no warning, and I get the result I expected, 4294967242, which is the result of the modulo-2^32 reduction.
I would be grateful if somebody can explain it to me.
Why doesn't the program no. 3 behave like program no. 4? Why don't I get a warning when compiling program no. 1, but I get one when compiling program no. 3.?
According to the standard (§[conv.fpint]):
A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
So, your -54.1234 is truncated to -54. Since that can't be represented in an unsigned, you get undefined behavior.
When converting floating point numbers to integers, C and C++ round floating point numbers towards zero. The rounded result must then be representable in the destination type.
As a result, for a 32-bit unsigned int the conversion is guaranteed to give the correct result if -1 < x < 2^32. For smaller numbers there are no guarantees. Since numbers between -1 and 0 must be rounded to zero, and numbers -1 and smaller have no requirements, it wouldn't be surprising if the compiler checks whether x < 0 and gives a result of 0 in that case. (The compiler might check whether x < 1 and give a result of 0; this handles very small positive numbers as well.)
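If the wraparound of program no. 1 is what you actually want, you can make every step well defined by converting through a signed integer first; a minimal sketch (assuming a 32-bit unsigned int):
#include <iostream>

int main() {
    double d = -54.1234;
    int truncated = static_cast<int>(d);                    // defined: -54 is representable in int
    unsigned int b = static_cast<unsigned int>(truncated);  // defined: reduced modulo 2^32
    std::cout << b << "\n";                                 // prints 4294967242
    return 0;
}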