c++11 chrono conditional statements

Would somebody please describe the following code?
template<typename _Rep2, typename = typename
         enable_if<is_convertible<_Rep2, rep>::value
                   && (treat_as_floating_point<rep>::value
                       || !treat_as_floating_point<_Rep2>::value)>::type>
    constexpr explicit duration(const _Rep2& __rep)
    : __r(static_cast<rep>(__rep)) { }

template<typename _Rep2, typename _Period2, typename = typename
         enable_if<treat_as_floating_point<rep>::value
                   || (ratio_divide<_Period2, period>::den == 1
                       && !treat_as_floating_point<_Rep2>::value)>::type>
    constexpr duration(const duration<_Rep2, _Period2>& __d)
    : __r(duration_cast<duration>(__d).count()) { }

These are the gcc/libstdc++ implementation of the std::chrono::duration constructors. We can look at them one at a time:
template <typename _Rep2,
          typename = typename enable_if
          <
              is_convertible<_Rep2, rep>::value &&
              (treat_as_floating_point<rep>::value ||
               !treat_as_floating_point<_Rep2>::value)
          >::type>
constexpr
explicit
duration(const _Rep2& __rep)
    : __r(static_cast<rep>(__rep))
{ }
Formatting helps readability. It doesn't really matter what the style is, as long as it has some. ;-)
This first constructor is constexpr and explicit, meaning if the inputs are compile-time constants, the constructed duration can be a compile-time constant, and the input won't implicitly convert to the duration.
The overall purpose of this constructor is to explicitly convert a scalar (or emulation of a scalar) into a chrono::duration.
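For instance, because the constructor is explicit, a scalar will not implicitly become a duration; you have to ask for the conversion (a quick sketch):

#include <chrono>

int main()
{
    using namespace std::chrono;
    minutes m1{5};     // ok: direct (explicit) construction from an integral value
    // minutes m2 = 5; // error: the constructor is explicit, no implicit conversion
}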
The second typename in the template argument list is a constraint on _Rep2. It says:
_Rep2 must be implicitly convertible to rep (rep is the representation type of the duration), and
Either rep is a floating point type (or emulating a floating point type), or _Rep2 is not a floating point type (or emulation of one).
If these constraints are not met, this constructor literally does not exist. The effect of these constraints is that you can construct floating-point-based durations from floating-point and integral arguments, but integral-based durations must be constructed from integral arguments.
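Here is a minimal, self-contained sketch of the same enable_if technique, with names of my own invention (this is not the libstdc++ code):

#include <type_traits>

struct IntegralOnly
{
    // This constructor template only participates in overload resolution
    // when T is integral; otherwise enable_if<...>::type does not exist,
    // substitution fails, and the constructor is silently removed (SFINAE).
    template <typename T,
              typename = typename std::enable_if<std::is_integral<T>::value>::type>
    explicit IntegralOnly(const T&) { }
};

int main()
{
    IntegralOnly a{42};      // ok: int satisfies the constraint
    // IntegralOnly b{1.5};  // error: no matching constructor
}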
The rationale for this constraint is to prevent silently discarding the fractional part of floating-point arguments. For example:
minutes m{1.5}; // compile-time error
This will not compile because minutes is integral based, and the argument is floating point, and if it did compile, it would silently discard the .5 resulting in 1min.
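For contrast, the cases that do satisfy the constraints compile fine (a sketch):

#include <chrono>
#include <ratio>

int main()
{
    using namespace std::chrono;
    minutes m{1};                              // ok: integral rep from integral argument
    duration<double, std::ratio<60>> fm{1.5};  // ok: floating-point rep accepts 1.5
}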
Now for the second chrono::duration constructor:
template <typename _Rep2,
          typename _Period2,
          typename = typename enable_if
          <
              treat_as_floating_point<rep>::value ||
              (ratio_divide<_Period2, period>::den == 1 &&
               !treat_as_floating_point<_Rep2>::value)
          >::type>
constexpr
duration(const duration<_Rep2, _Period2>& __d)
    : __r(duration_cast<duration>(__d).count())
{ }
This constructor serves as a converting chrono::duration constructor. That is, it converts one unit into another (e.g. hours to minutes).
Again there is a constraint on the template arguments _Rep2 and _Period2. If these constraints are not met, the constructor does not exist. The constraints are:
rep is floating-point, or
_Period2 / period results in a ratio with a denominator of 1 and _Rep2 is an integral type (or emulation thereof).
The effect of this constraint is that if you have a floating-point duration, then any other duration (integral or floating-point-based) will implicitly convert to it.
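For example (a sketch; fminutes is my name for a floating-point-based minutes type):

#include <chrono>
#include <ratio>

int main()
{
    using namespace std::chrono;
    using fminutes = duration<double, std::ratio<60>>;
    fminutes m = seconds{30};   // ok: floating-point destination, m.count() == 0.5
}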
However, integral-based durations are much more picky. If you are converting to an integral-based duration, the source duration cannot be floating-point-based, and the conversion from the source integral-based duration to the destination integral-based duration must be exact. That is, the conversion must not divide by any number except 1 (it may only multiply).
For example:
hours h = 30min; // will not compile
minutes m = 1h; // ok
The first example does not compile because it would require division by 60, truncating 30min down to 0h, which is not equal to 30min. But the second example compiles because m will exactly equal 1h (it will hold 60min).
What you can take away from this:
Always let <chrono> do conversions for you. If you are multiplying or dividing by 60 or 1000 (or whatever) in your code, you are needlessly introducing the possibility of errors. Furthermore <chrono> will let you know if you have any lossy conversions if you delegate all of your conversions to <chrono>.
Use implicit <chrono> conversions as much as possible. They will either compile and be exact, or they won't compile. If they don't compile, that means you are asking for a conversion that involves truncation error. It is ok to ask for truncation error, as long as you don't do so accidentally. The syntax for asking for a truncating conversion is:
hours h = duration_cast<hours>(30min); // ok, h == 0h
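A sketch contrasting the approaches (the variable names are mine):

#include <chrono>

int main()
{
    using namespace std::chrono;
    seconds s{90};

    // Error-prone: manual conversion factor, silently truncates to 1
    long whole = s.count() / 60;

    // Explicit truncation: lossy, but visibly asked for in the code
    minutes m = duration_cast<minutes>(s);   // m == 1min

    // Implicit conversions are only allowed when they are exact
    seconds back = m;                        // ok: 1min -> 60s exactly
    // minutes bad = s;                      // error: would truncate
}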

Related

Why does converting the same value in two ways produce different results?

Two kinds of conversion of the same constant to float64 return the same value, but when I try to convert these new values to int, the results are different.
...
const Big = 92233720368547758074444444

func needFloat(x float64) float64 {
    return x
}

func main() {
    fmt.Println(needFloat(Big))
    fmt.Println(float64(Big))
    fmt.Println(int(needFloat(Big)))
    fmt.Println(int(float64(Big)))
}
I'd expect the first two Println calls to print the same value:
fmt.Println(needFloat(Big)) // 9.223372036854776e+25
fmt.Println(float64(Big)) // 9.223372036854776e+25
so when I convert them to int, I expect the same output, but:
fmt.Println(int(needFloat(Big))) // -2147483648
fmt.Println(int(float64(Big))) // constant 92233720368547758080000000 overflows int
If your real question is why one attempt to convert to int produces a compile-time error message while the other produces a very negative integer, it's because one is a compile-time conversion and the other is a runtime conversion. I think it helps in these cases to be explicit about what you are expecting, and about what can be run and what can't. Here's a Go Playground version of your code, with the last conversion commented out. The reason for commenting it out is, of course, that it doesn't compile.
As Adrian noted in a comment, Big is a constant, specifically an untyped one. As Uvelichitel answered, a constant x (of any type) can be converted to a new and different type T if and only if
x is representable by a value of type T.
(The quote part is from the section Uvelichitel linked, except that mine adds the inner link for the word "representable".)
The expression float64(Big) is an explicit type conversion, with a constant as its x, so the result is a float64-typed constant with the given value. So far, that's fine: now we have 92233720368547758074444444 as a float64. This chops off some of the digits: the actual internal representation is 92233720368547758080000000 (see variant with %f directives). The low digits, ...74444444, have been rounded to ...80000000. See the link for "representable" for why the rounding occurs.
The expression int(float64(Big)) is an outer explicit type conversion surrounding an inner explicit type conversion. We already know what the inner type conversion does: it produces the float64 constant 92233720368547758080000000.0. The outer conversion tries to represent this new value as int, but it does not fit, producing an error:
./prog.go:18:17: constant 92233720368547758080000000 overflows int
if the commented-out line is uncommented. Note again that the value has been rounded, due to the inner conversion.
On the other hand, needFloat(Big) is a function call. Calling the function assigns the untyped constant to its argument (a float64) and obtains its return value (the same float64, value 92233720368547758080000000.0). Printing that prints what you'd expect, given the default or explicit formatting directive. The returned value is not a constant.
Similarly, int(needFloat(Big)) calls needFloat, which returns the same float64 value—not a constant—as before. The int explicit type conversion tries to convert this value to int at runtime, rather than at compile time. For such conversions between numeric types, there is a list of three explicit rules at https://golang.org/ref/spec#Conversions, plus a final caveat. Here, rule 2 applies: any fractional part is discarded. But the caveat also applies:
In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
In other words, there is no runtime error, but the int value you get—which in this case was -2147483648, which is the smallest allowed 32-bit integer—is up to the implementation. This particular implementation chose to use this particular negative number as its result. Another implementation might choose some other number. (Interestingly, in the playground, if I convert directly to uint I get zero. If I convert to int, then to uint, I get the 0x80000000 I expected.)
Hence, the key difference in terms of whether you get an error is whether you do the conversion at compile time, via constants, or at runtime, via runtime conversion.
int(float64(Big)) // illegal, because:

    A constant value x can be converted to type T if x is representable
    by a value of type T.

int(needFloat(Big)) // a non-constant expression, because of the function call:

    A non-constant value x can be converted to type T in any of these cases:
    - x's type and T are both integer or floating point types.

https://golang.org/ref/spec#Conversions

Why can't std::is_permutation act between two different types of data?

Suppose I have a vector of integers and a vector of strings, and I want to compare whether they have equivalent elements, without consideration of order. Ultimately, I'm asking if the integer vector is a permutation of the string vector (or vice versa). I'd like to be able to just call is_permutation, specify a binary predicate that allows me to compare the two, and move on with my life, e.g.:
bool checkIntStringComparison(const std::vector<int>& intVec,
                              const std::vector<std::string>& stringVec,
                              const std::map<int, std::string>& intStringMap)
{
    return std::is_permutation<std::vector<int>::const_iterator,
                               std::vector<std::string>::const_iterator>(
        intVec.cbegin(), intVec.cend(), stringVec.cbegin(),
        [&intStringMap](const int& i, const std::string& string) {
            return string == intStringMap.at(i);
        });
}
But trying to compile this (in gcc) returns an error message that boils down to:
no match for call to stuff::<lambda(const int&, const string&)>(const std::__cxx11::basic_string&, const int&)
See how it swaps the call's argument types relative to the lambda's signature? If I switch the lambda's parameters around, the signature switches itself the other way.
Digging around about this error, it seems that the standard specifies for std::is_permutation that ForwardIterator1 and 2 must be the same type. So I understand the compiler error in that regard. But why should it be this way? If I provide a binary predicate that allows me to compare the two (or if we had previously defined some equality operator between the two?), isn't the real core of the algorithm just searching through container 1 to make sure all its elements are in container 2 uniquely?
The problem is that an element can occur more than once. That means that the predicate needs to be able to not only compare the elements of the first range to the elements of the second range, but to compare the elements of the first range to themselves:
if (size(range1) != size(range2))
    return false;
for (auto const& x1 : range1)
    // note: the first count_if applies pred to two elements of range1
    if (count_if(range1, [&](auto const& y1) { return pred(x1, y1); }) !=
        count_if(range2, [&](auto const& y2) { return pred(x1, y2); }))
        return false;
return true;
Since it's relatively tricky to create a function object that takes two distinct signatures, and passing two predicates would be confusing, the easiest option was to specify that both ranges must have the same value type.
Your options are:
Wrap one range (or both) in a transform that gives the same value type (e.g. use Boost.Adaptors.Transformed);
Write your own implementation of std::is_permutation (e.g. copying the example implementation on cppreference);
Actually, note that the gcc (i.e. libstdc++) implementation does not enforce that the value types are the same; it just requires several predicate signatures which you'd have to provide anyway. So write a polymorphic predicate, e.g. a function object or a polymorphic lambda, or one with parameter types convertible from both range value types (in your case boost::variant<int, string> - ugly, but probably not that bad). This is non-portable, as another implementation might choose to enforce that requirement; a portable hand-rolled version is sketched below.
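For the concrete case in the question, a portable hand-rolled check along the lines of the pseudocode above might look like this (a sketch; it assumes every int in intVec has an entry in intStringMap, since map::at throws otherwise):

#include <algorithm>
#include <map>
#include <string>
#include <vector>

// Is stringVec a permutation of intVec under the mapping intStringMap?
// O(n^2), like std::is_permutation itself.
bool checkIntStringComparison(const std::vector<int>& intVec,
                              const std::vector<std::string>& stringVec,
                              const std::map<int, std::string>& intStringMap)
{
    if (intVec.size() != stringVec.size())
        return false;
    for (const int& i : intVec)
    {
        const std::string& s = intStringMap.at(i);
        // count occurrences of the mapped value on each side
        auto n1 = std::count_if(intVec.begin(), intVec.end(),
            [&](const int& j) { return intStringMap.at(j) == s; });
        auto n2 = std::count(stringVec.begin(), stringVec.end(), s);
        if (n1 != n2)
            return false;
    }
    return true;
}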

Can evaluation of functions happen during compile time?

Consider the below function,
public static int foo(int x) {
    return x + 5;
}
Now, let us call it,
int in = /*Input taken from the user*/;
int x = foo(10); // ... (1)
int y = foo(in); // ... (2)
Here, can the compiler change
int x = foo(10); // ... (1)
to
int x = 15; // ... (1)
by evaluating the function call during compile time since the input to the function is available during compile time ?
I understand this is not possible during the call marked (2) because the input is available only during run time.
I do not want to know a way of doing it in any specific language. I would like to know why this can or can not be a feature of a compiler itself.
C++ does have a method for this:
Have a read up on the 'constexpr' keyword in C++11; it allows compile-time evaluation of functions.
There is a limitation: in C++11 the function body must consist of a single return statement (not multiple lines of code), but it can call other constexpr functions. (C++14 does not have this limitation, AFAIK.)
static constexpr int foo(int x) {
    return x + 5;
}
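You can watch the compile-time evaluation happen with static_assert, whose condition must be a constant expression (a sketch):

constexpr int foo(int x) {
    return x + 5;
}

// Only compiles if foo(10) is evaluated at compile time.
static_assert(foo(10) == 15, "foo(10) folded at compile time");

int main() {
    int in = 0;        // imagine this came from the user at run time
    return foo(in);    // the same function is still callable at run time
}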
EDIT:
Why a compiler might not evaluate a function (just my guess):
It might not be appropriate to remove a function by evaluating it without being told.
The function could be used in different compilation units, and with static/dynamic inputs: thus evaluating it in some circumstances and adding a call in other places.
This use would provide inconsistent execution times (especially on a deterministic platform like AVR) where timing may be important, or at least need to be predictable.
Also interrupts (and how the compiler interacts with them) may come into play here.
EDIT:
constexpr is actually stronger: in a context that requires a constant expression (an array bound, a static_assert, a template argument), the compiler must evaluate the call at compile time. The compiler is free to fold away functions without constexpr, but the programmer can't rely on it doing so.
Can you give an example of a case where the user would have benefited from this but the compiler chose not to do it?
inline functions may or may not resolve to constant expressions that get optimized into the end result.
However, constexpr guarantees it. An inline function cannot be used as a compile-time constant, whereas constexpr lets you formulate compile-time functions and, more so, compile-time objects.
A basic example where constexpr makes a guarantee that inline cannot.
constexpr int foo(int a, int b, int c) {
    return a + b + c;
}

int array[foo(1, 2, 3)];
And the same as a simple object.
struct Foo {
    constexpr Foo(int a, int b, int c) : val(a + b + c) {}
    int val;
};

constexpr Foo foo(1, 2, 4);
int array[foo.val];
Unless foo.val is a compile time constant, the code above will not compile.
Even as just a function, an inline function carries no such guarantee. And the linker can also inline across multiple compilation units, but that happens after the syntax has been compiled (array bounds must already have been checked as integer constants).
This is kind of like meta-programming, but without the templates. Of course these examples do not do the topic justice, however very complex solutions would benefit from the ability to use objects and functional programming to achieve a result.
Yes, evaluation can happen during compile time. This comes under the heading of constant folding and function inlining, both of which are common optimizations for optimizing compilers.
Many languages do not have strong distinction between "compile time" and "run time", but the general rule is that the language defines an "execution model" which defines the behavior of any particular program with any particular input (or specifies that it is undefined). The compiler must produce an executable that can read any input and produce the corresponding output as defined by the execution model. What happens inside the executable doesn't matter -- as long as the externally viewed behavior is correct.
Here "input", "output" and "behavior" includes all possible interactions with the environment that are defined in the execution model, including timing effects.

Float multiplies Vector in Ruby

Assume I have implemented a Vector class. In C++ it is possible to do "scaling" in natural math expressions by overloading operator* at global scope:
template <typename T> // T can be int, double, complex<>, etc.
Vector operator*(const T& t, const Vector& v);
template <typename T> // T can be int, double, complex<>, etc.
Vector operator*(const Vector& v, const T& t);
However, when it goes to Ruby, since parameters are not typed, it would be possible to write
class Vector
  def *(another)
    case another
    when Vector then ...
    when Numeric then ...
    end
  end
end
This allows Vector * Numeric, but not Numeric * Vector. Is there a way to solve it?
[Using Numeric rather than Numerical in my reply.]
The most general way to do this is to add a coerce method to Vector. When Ruby encounters 5 * your_vector and the call to 5.*(your_vector) fails, it will then call your_vector.coerce(5). Your coerce method passes back two items, and the * method is retried on those items.
Conceptually, something like this happens after the 5.*(your_vector) failure:
first, second = your_vector.coerce(5)
first.*(second)
The simplest approach is to pass back your_vector as the first item and 5 as the second:
def coerce(other)
  case other
  when Numeric
    return self, other
  else
    raise TypeError, "#{self.class} can't be coerced into #{other.class}"
  end
end
That works for commutative operations, but not so well for non-commutative operations. If you have a simple, self-contained program that only needs * to work, you could get away with it. If you're developing a library or need something more generic, and it makes sense to transform 5 into a Vector, you can do that in coerce:
def coerce(other)
  case other
  when Numeric
    return Vector.new(other), self
  else
    raise TypeError, "#{self.class} can't be coerced into #{other.class}"
  end
end
This is a much more robust solution, if it makes semantic sense. If it doesn't, you can create an intermediate type that you can transform Numeric into, one that does know how to multiply with Vector. This is the approach that Matrix takes.
As a last resort, you can pull out the big guns and use alias_method to redefine * on Numeric to handle Vector. I'm not going to add the code for this approach, since doing it wrong will lead to disaster, and I haven't thought through the edge cases involved.

VS2013: Potential issue with optimizing move semantics for classes with vector members?

I compiled the following code on VS2013 (using "Release" mode optimization) and was dismayed to find the assembly of std::swap(v1,v2) was not the same as std::swap(v3,v4).
#include <vector>
#include <iterator>
#include <algorithm>
#include <utility>   // std::move, std::swap
#include <cstdlib>   // std::rand

template <class T>
class WRAPPED_VEC
{
public:
    typedef T value_type;

    void push_back(T value) { m_vec.push_back(value); }

    WRAPPED_VEC() = default;
    WRAPPED_VEC(WRAPPED_VEC&& other) : m_vec(std::move(other.m_vec)) {}
    WRAPPED_VEC& operator=(WRAPPED_VEC&& other)
    {
        m_vec = std::move(other.m_vec);
        return *this;
    }

private:
    std::vector<T> m_vec;
};

int main(int, char*[])
{
    WRAPPED_VEC<int> v1, v2;
    std::generate_n(std::back_inserter(v1), 10, std::rand);
    std::generate_n(std::back_inserter(v2), 10, std::rand);
    std::swap(v1, v2);

    std::vector<int> v3, v4;
    std::generate_n(std::back_inserter(v3), 10, std::rand);
    std::generate_n(std::back_inserter(v4), 10, std::rand);
    std::swap(v3, v4);

    return 0;
}
The std::swap(v3, v4) statement turns into "perfect" assembly. How can I achieve the same efficiency for std::swap(v1, v2)?
There are a couple of points to be made here.
1. If you don't know for absolutely certain that your way of calling swap is equivalent to the "correct" way of calling swap, you should always use the "correct" way:
using std::swap;
swap(v1, v2);
2. A really convenient way to look at the assembly for something like calling swap is to put the call by itself in a test function. That makes it easy to isolate the assembly:
void
test1(WRAPPED_VEC<int>& v1, WRAPPED_VEC<int>& v2)
{
    using std::swap;
    swap(v1, v2);
}

void
test2(std::vector<int>& v1, std::vector<int>& v2)
{
    using std::swap;
    swap(v1, v2);
}
As it stands, test1 will call std::swap which looks something like:
template <class T>
inline
void
swap(T& x, T& y) noexcept(is_nothrow_move_constructible<T>::value &&
                          is_nothrow_move_assignable<T>::value)
{
    T t(std::move(x));
    x = std::move(y);
    y = std::move(t);
}
And this is fast. It will use WRAPPED_VEC's move constructor and move assignment operator.
However vector swap is even faster: It swaps the vector's 3 pointers, and if std::allocator_traits<std::vector<T>::allocator_type>::propagate_on_container_swap::value is true (and it is not), also swaps the allocators. If it is false (and it is), and if the two allocators are equal (and they are), then everything is ok. Otherwise Undefined Behavior happens.
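Conceptually (a sketch only, not the actual libstdc++ source), vector's specialized swap boils down to three pointer swaps:

#include <utility>

template <class T>
struct vector_sketch
{
    T* begin_;   // start of the data
    T* end_;     // one past the last element
    T* cap_;     // one past the end of the allocated storage

    // O(1): no element is moved, copied, or even touched.
    void swap(vector_sketch& other) noexcept
    {
        std::swap(begin_, other.begin_);
        std::swap(end_, other.end_);
        std::swap(cap_, other.cap_);
    }
};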
To make test1 identical to test2 performance-wise you need:
friend
void
swap(WRAPPED_VEC& v1, WRAPPED_VEC& v2)   // inside the class template, WRAPPED_VEC means WRAPPED_VEC<T>
{
    using std::swap;
    swap(v1.m_vec, v2.m_vec);
}
One interesting thing to point out:
In your case, where you are always using std::allocator<T>, the friend function is always a win. However if your code allowed other allocators, possibly those with state, which might compare unequal, and which might have propagate_on_container_swap::value false (as std::allocator<T> does), then these two implementations of swap for WRAPPED_VEC diverge somewhat:
1. If you rely on std::swap, then you take a performance hit, but you will never have the possibility to get into undefined behavior. Move construction on vector is always well-defined and O(1). Move assignment on vector is always well-defined and can be either O(1) or O(N), and either noexcept(true) or noexcept(false).
If propagate_on_container_move_assignment::value is false, and if the two allocators involved in a move assignment are unequal, vector move assignment will become O(N) and noexcept(false). Thus a swap using vector move assignment will inherit these characteristics. However, no matter what, the behavior is always well-defined.
2. If you overload swap for WRAPPED_VEC, thus relying on the swap overload for vector, then you expose yourself to the possibility of undefined behavior if the allocators compare unequal and have propagate_on_container_swap::value equal to false. But you pick up a potential performance win.
As always, there are engineering tradeoffs to be made. This post is meant to alert you to the nature of those tradeoffs.
PS: The following comment is purely stylistic: all-capital names for class types are generally considered poor style, as tradition reserves all-capital names for macros.
The reason for this is that std::swap has an optimized overload for std::vector<T> (see right click -> Go To Definition). To make this code work fast for your wrapper, follow the instructions found on cppreference.com about std::swap:
std::swap may be specialized in namespace std for user-defined types, but such specializations are not found by ADL (the namespace std is not the associated namespace for the user-defined type). The expected way to make a user-defined type swappable is to provide a non-member function swap in the same namespace as the type: see Swappable for details.
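Putting that advice together with the friend swap shown earlier, the usual two-step idiom then finds the right overload via ADL (a self-contained sketch):

#include <utility>
#include <vector>

template <class T>
class WRAPPED_VEC
{
public:
    // Non-member swap defined inline as a friend: found by ADL.
    friend void swap(WRAPPED_VEC& v1, WRAPPED_VEC& v2)
    {
        using std::swap;
        swap(v1.m_vec, v2.m_vec);   // delegates to vector's O(1) swap
    }
private:
    std::vector<T> m_vec;
};

int main()
{
    WRAPPED_VEC<int> a, b;
    using std::swap;   // two-step idiom: falls back to std::swap if needed
    swap(a, b);        // ADL finds the friend swap above
}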
