c++ pointer assignment confusion - c++11

Consider this MCVE (minimal, complete, verifiable example):
class A
{
class B
{
public:
B();
~B();
};
public:
B* a, b, c;
A();
~A();
void foo();
};
void A::foo()
{
a = b = c;
}
yields the following compilation error in Visual Studio 2015
Error C2679 binary '=': no operator found which takes a right-hand operand of type 'A::B *' (or there is no acceptable conversion)
Strangely, if I declare a, b, and c as follows
B* a; B* b; B* c;
there is no compilation issue. Because the pointers are of class type, am I required to provide an appropriate B operator=(B& poo) for the original declaration to work? Certainly I can write int x, y, z; so why is the above generating a compiler error?

The correct answer here is: don't declare multiple variables on one line. It's a pointless character saving that gains nothing semantically and merely leads to confusion. Don't use the std::add_pointer_t thing, and don't just add more stars.

This is an anachronism from C; the pointer asterisk (*) binds to the name, and not the type, yielding:
B* a;
B b;
B c;
A better, less error prone way to declare multiple raw pointers is this:
std::add_pointer_t<B> a, b, c;
If you only have access to C++11, you need to use the more verbose std::add_pointer<B>::type instead.
In some codebases you might also find named typedefs for commonly used pointer types, like so:
typedef B* BPtr;
BPtr a, b, c;
Which yields what you'd expect. You can still use that, mixing and matching with using and std::add_pointer.
Alternatively, you can put a star in front of every name. That's why some people write this as:
B *a, *b, *c;
I'd personally discourage that. As mentioned before, it's not really readable and quite error-prone.
However, all of this assumes that your variables deliberately share one type, rather than coincidentally. An example of such a coincidence would be two unrelated numeric values that both happen to be ints. That is more of a design decision, though, and I assume that if you're asking about a single-type, multiple-name declaration, you understand what it entails.
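If in doubt, you can ask the compiler directly. A minimal sketch (the empty struct B and the alias BPtr are stand-ins for illustration, not the question's code) confirming which names end up as pointers:

```cpp
#include <type_traits>

struct B {};

// One star, three names: only 'a' is a pointer; b and c are plain B.
B* a, b, c;
static_assert(std::is_same<decltype(a), B*>::value, "a is B*");
static_assert(std::is_same<decltype(b), B>::value, "b is plain B");

// A type alias applies the pointer to every declared name.
using BPtr = B*;
BPtr p, q, r;
static_assert(std::is_same<decltype(q), BPtr>::value, "q is B*");
```

Every static_assert here passes, which is exactly the asymmetry the question ran into.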

In the original declaration, a is of type B*, but b and c are of type B.
To make it work as a single declaration it should be
B* a, *b, *c;
IMHO I'd leave it as separate declarations if only to avoid the entire issue.

Related

Concatenate 2 Enumerated type variable sets

enum sup;
sup=['a','b','c'];
enum sup2;
sup2=['d','e','f'];
enum sup3;
sup3=sup++sup2;
I want to get a new enumerated type sup3 with all of a, b, c, d, e, f. Is there any way in MiniZinc we can do this?
The short answer is no, this is currently not supported. The main issue with the concatenation of enumerated types comes from the fact we are not just concatenating two lists of things, but we are combining types. Take your example:
enum sup = {A, B, C};
enum sup2 = {D, E, F};
enum sup3 = sup ++ sup2;
When I now write E somewhere in an expression, I no longer know if it has type sup2 or sup3. As you might imagine, there is no guarantee that E would have the same value (for the solver) in the two enumerated types, so this can be a big problem.
To shine a glimmer of hope, the MiniZinc team has been working on a similar approach to make this possible (but not yet formally announced). Instead of your syntax, one would write:
enum X = {A, B, C};
enum Y = {D, E, F} ++ F(X);
The idea behind this is that F(X) now gives a constructor for the usage of X in Y. This means that if we see just A, we know it's of type X, but if we see F(A), then it's of type Y. Again, this is not yet possible, but will hopefully end up in the language soon.
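For comparison only, the F(X) idea is essentially a tagged union. A rough C++17 analogy (the names X, YLit, and Y are hypothetical, and this is not MiniZinc), where the variant's alternative index plays the role of the F(...) constructor tag:

```cpp
#include <cassert>
#include <variant>

enum class X { A, B, C };
enum class YLit { D, E, F };

// Y = {D, E, F} ++ F(X): a value of Y is either one of Y's own
// literals, or an X wrapped in a constructor-like tag.
using Y = std::variant<YLit, X>;
```

Seeing a bare A, we know it's an X; seeing it inside the variant, we know it's a Y, which resolves the ambiguity described above.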
More of a comment, but here is an example of my need. When doing code coverage and FSM transition analysis, I am forced to use exclusions so that some transitions for the return_to_state in the code below are not analyzed. If I could instead use concatenated types as shown, I would have more control over the tools' reporting of missing transitions.
type Read_states is (ST1);
type Write_states is (ST2, ST3, ST4);
type SPI_states is (SPI_write);
type All_States is Read_states & Write_states & SPI_states;
I could make return_to_state of type Write_states and FRAM_state of type All_states and then not have to put in exclusions in my FSM analysis.

System Verilog typedef of typedef

typedef enums allow a convenient way to describe a set of name-value pairs. Is there a way to chain them to create deeper structures using enum at all levels?
For instance, I have the following:
typedef enum logic {ALPHA=0, BETA=1} a_t;
typedef enum logic {GAMMA=0, DELTA=1} b_t;
typedef enum logic {ZETA=0, ETA=1} c_t;
...
I want to create a variable c which is formed of a_t and b_t. Is this possible?
Something like:
a_t b_t c;
so at every dimension of c, I can have enums.
EDIT: Some clarification - assume a_t, b_t and c_t are immutable as they are generated automatically. And there are hundreds of such different enums. I want to create bigger structures as I need because automatically generating all combinations of them would make the code too big and messy.
For instance, say my a_t describes the number of masters and b_t describes the number of slaves. I want to create a structure where I have this hierarchy in my signal, while at the same time using enums for ease of readability and use.
So, something like this:
c[MASTER_0][SLAVE_0]
c[MASTER_0][SLAVE_1]
c[MASTER_1][SLAVE_0]
c[MASTER_1][SLAVE_1]
Are you perhaps referring to an associative array, such as:
c[ALPHA] = BETA;
If so, you could simply refer to it as:
b_t c[a_t];
which means: create an associative array c whose key is of enum type a_t and whose value is of enum type b_t. You could keep going if you'd like :)
typedef enum logic {ALPHA=0, BETA=1} a_t;
typedef enum logic {GAMMA=0, DELTA=1} b_t;
typedef enum logic {BAD_AT=0, GREEK_LETTERS=1} c_t;
c_t my_data_structure[a_t][b_t];
// Assigning a value
my_data_structure[ALPHA][GAMMA] = GREEK_LETTERS;
See an example on EDA Playground here.
Also, I think you're slightly misunderstanding the use of typedef. It does not describe a set of name-value pairs; rather, it gives a new name to a data type. It is the enum that actually creates the 'set of name-value pairs', though I'd clarify that it is essentially assigning identifiers to values. It would help if you could explain the application for a clearer answer.
You cannot create one enum typedef from another or from a group of others; some may call that extending an enum. You also cannot have an enum with multiple names for the same value.
What you can do is have an associative array with name/value pairs, and join those arrays together.
int a[string], b[string], c[string];
initial begin
a = '{"ALPHA":0, "BETA":1};
b = '{"GAMMA":0, "DELTA":1};
c = a;
foreach(b[s]) c[s]=b[s];
end
There are ways of gathering the names of each enumerated type to initialize the associative array as well.
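For readers more at home in C++, the same join-with-overwrite pattern can be sketched with std::map standing in for the SystemVerilog associative array (the names Pairs and join are hypothetical):

```cpp
#include <cassert>
#include <map>
#include <string>

using Pairs = std::map<std::string, int>;

// Join b into a copy of a; on a key collision the entry from b
// wins, mirroring the foreach overwrite in the SystemVerilog code.
Pairs join(const Pairs& a, const Pairs& b) {
    Pairs c = a;
    for (const auto& kv : b) c[kv.first] = kv.second;
    return c;
}
```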

C++11 why the type of 'decltype(x)' and 'decltype((x))' are different?

I found that they're different, and the language standard says what type each form should yield (the difference between a variable and an expression). But I really wish to know why these two types should be different.
#include<stdio.h>
int x=0;
decltype((x)) y=x;
int main()
{
y=2;
printf("%d,",x);
decltype((1+2))& z=x; // OK: (1+2) is an expression, but why should decltype differ?
z=3;
printf("%d\n",x);
return 0;
}
The running result is '2,3'
So why is decltype((x)) int& by design when x is an int? What is the consideration of C++ language design here? Is there any syntax consistency that requires such a design? (I don't wish to get "This is by design".)
Thanks for your explanations.
If you read e.g. this decltype reference you will see
2) If the argument is an unparenthesized id-expression or an unparenthesized class member access expression, ...
3) If the argument is any other expression...
...
b) if the value category of expression is lvalue, then decltype yields T&;
[Emphasis mine]
And then a little further down the note
Note that if the name of an object is parenthesized, it is treated as an ordinary lvalue expression, thus decltype(x) and decltype((x)) are often different types.
Because you use a parenthesized expression it is treated as an lvalue, meaning that 3.b above is active and decltype((x)) gives you int& if x is int.
It should be noted that while the reference isn't authoritative it is derived from the specification and generally reliable and correct.
From the C++11 specification ISO/IEC 14882:2011, section 7.1.6.2 [dcl.type.simple], sub-section 4:
The type denoted by decltype(e) is defined as follows:
— if e is an unparenthesized id-expression or an unparenthesized class member access (5.2.5), decltype(e) is the type of the entity named by e. If there is no such entity, or if e names a set of overloaded functions, the program is ill-formed;
— otherwise, if e is an xvalue, decltype(e) is T&&, where T is the type of e;
— otherwise, if e is an lvalue, decltype(e) is T&, where T is the type of e;
— otherwise, decltype(e) is the type of e
And with an example:
struct A { double x; };
const A* a = new A();
...
decltype((a->x)) x4 = x3; // type is const double&
Basically exactly what the previously linked reference said.
With your example, e in the specification is (x) (since you have decltype((x))). Now the first case doesn't fit, because (x) is not an unparenthesized expression. The second case doesn't fit, because (x) isn't an xvalue. The third case matches though: (x) is an lvalue of type int, leading decltype((x)) to be int&.
So the answer to your query is simply: Because the specification says so.
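The three cases from the specification can be checked directly with static_assert; a minimal sketch:

```cpp
#include <type_traits>

int x = 0;

// Unparenthesized id-expression: the declared type of x.
static_assert(std::is_same<decltype(x), int>::value, "plain name: int");
// (x) is a parenthesized lvalue expression of type int, so int&.
static_assert(std::is_same<decltype((x)), int&>::value, "parenthesized: int&");
// (1 + 2) is a prvalue, so the lvalue rule doesn't apply: plain int.
static_assert(std::is_same<decltype((1 + 2)), int>::value, "prvalue: int");
```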
Well, the answer I see here is "the specification says so". I looked into Stroustrup's original drafts of decltype, and this is what they say:
if expr in decltype(expr) is a variable or formal parameter the
programmer can trace down the variable’s or parameter’s declaration,
and the result of decltype is exactly the declared type. If expr is a
function invocation, the programmer can perform manual overload
resolution; the result of the decltype is the return type in the
prototype of the best matching function. The prototypes of the
built-in operators are defined by the standard, and if some are
missing, the rule that an lvalue has a reference type applies.
Look at the last statement here; I think that explains it: parentheses are a built-in operator, and wrapping a name in them yields an expression, so the lvalue-has-reference-type rule applies.

Can evaluation of functions happen during compile time?

Consider the below function,
public static int foo(int x){
return x + 5;
}
Now, let us call it,
int in = /*Input taken from the user*/;
int x = foo(10); // ... (1)
int y = foo(in); // ... (2)
Here, can the compiler change
int x = foo(10); // ... (1)
to
int x = 15; // ... (1)
by evaluating the function call during compile time since the input to the function is available during compile time ?
I understand this is not possible during the call marked (2) because the input is available only during run time.
I do not want to know a way of doing it in any specific language. I would like to know why this can or can not be a feature of a compiler itself.
C++ does have a method for this:
Have a read up on the constexpr keyword in C++11; it allows compile-time evaluation of functions.
In C++11 there is a limitation: the function body must be a single return statement (not multiple lines of code), though it can call other constexpr functions. C++14 lifts this restriction, AFAIK.
static constexpr int foo(int x){
return x + 5;
}
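A small sketch of what that buys you: placing the call in a context that requires a constant expression forces compile-time evaluation, and the code fails to compile if the compiler cannot perform it.

```cpp
constexpr int foo(int x) {
    return x + 5;
}

// static_assert and array bounds require constant expressions,
// so both calls below must be evaluated at compile time.
static_assert(foo(10) == 15, "folded at compile time");
int array[foo(10)];
```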
EDIT:
Why a compiler might not evaluate a function (just my guess):
It might not be appropriate to remove a function by evaluating it without being told.
The function could be used in different compilation units, and with static/dynamic inputs: thus evaluating it in some circumstances and adding a call in other places.
This use would provide inconsistent execution times (especially on a deterministic platform like AVR) where timing may be important, or at least need to be predictable.
Also interrupts (and how the compiler interacts with them) may come into play here.
EDIT:
constexpr is actually stronger: it requires that the compiler be able to do this, and evaluation at compile time is guaranteed wherever the result is used in a constant expression. The compiler is free to fold away functions without constexpr, but the programmer can't rely on it doing so.
Can you give an example in the case where the user would have benefited from this but the compiler chose not to do it ?
Inline functions may or may not resolve to constant expressions that could be optimized into the end result.
However, a constexpr guarantees it. An inline function cannot be used as a compile time constant whereas constexpr can allow you to formulate compile time functions and more so, objects.
A basic example where constexpr makes a guarantee that inline cannot.
constexpr int foo( int a, int b, int c ){
return a+b+c;
}
int array[ foo(1, 2, 3) ];
And the same as a simple object.
struct Foo{
constexpr Foo( int a, int b, int c ) : val(a+b+c){}
int val;
};
constexpr Foo foo( 1,2,4 );
int array[ foo.val ];
Unless foo.val is a compile time constant, the code above will not compile.
Even as just a function, an inline function has no such guarantee. The linker can also do inlining across multiple compilation units, but only after the code has been compiled (array bounds must already have been checked as integer constants).
This is kind of like meta-programming, but without the templates. Of course these examples do not do the topic justice, however very complex solutions would benefit from the ability to use objects and functional programming to achieve a result.
Yes, evaluation can happen during compile time. This comes under the heading of constant folding and function inlining, both of which are common optimizations for optimizing compilers.
Many languages do not have strong distinction between "compile time" and "run time", but the general rule is that the language defines an "execution model" which defines the behavior of any particular program with any particular input (or specifies that it is undefined). The compiler must produce an executable that can read any input and produce the corresponding output as defined by the execution model. What happens inside the executable doesn't matter -- as long as the externally viewed behavior is correct.
Here "input", "output" and "behavior" includes all possible interactions with the environment that are defined in the execution model, including timing effects.

VS2013: Potential issue with optimizing move semantics for classes with vector members?

I compiled the following code on VS2013 (using "Release" mode optimization) and was dismayed to find that the assembly for std::swap(v1, v2) was not the same as for std::swap(v3, v4).
#include <vector>
#include <iterator>
#include <algorithm>
template <class T>
class WRAPPED_VEC
{
public:
typedef T value_type;
void push_back(T value) { m_vec.push_back(value); }
WRAPPED_VEC() = default;
WRAPPED_VEC(WRAPPED_VEC&& other) : m_vec(std::move(other.m_vec)) {}
WRAPPED_VEC& operator =(WRAPPED_VEC&& other)
{
m_vec = std::move(other.m_vec);
return *this;
}
private:
std::vector<T> m_vec;
};
int main (int, char *[])
{
WRAPPED_VEC<int> v1, v2;
std::generate_n(std::back_inserter(v1), 10, std::rand);
std::generate_n(std::back_inserter(v2), 10, std::rand);
std::swap(v1, v2);
std::vector<int> v3, v4;
std::generate_n(std::back_inserter(v3), 10, std::rand);
std::generate_n(std::back_inserter(v4), 10, std::rand);
std::swap(v3, v4);
return 0;
}
The std::swap(v3, v4) statement turns into "perfect" assembly. How can I achieve the same efficiency for std::swap(v1, v2)?
There are a couple of points to be made here.
1. If you don't know for absolutely certain that your way of calling swap is equivalent to the "correct" way of calling swap, you should always use the "correct" way:
using std::swap;
swap(v1, v2);
2. A really convenient way to look at the assembly for something like calling swap is to put the call by itself in a test function. That makes it easy to isolate the assembly:
void
test1(WRAPPED_VEC<int>& v1, WRAPPED_VEC<int>& v2)
{
using std::swap;
swap(v1, v2);
}
void
test2(std::vector<int>& v1, std::vector<int>& v2)
{
using std::swap;
swap(v1, v2);
}
As it stands, test1 will call std::swap which looks something like:
template <class T>
inline
void swap(T& x, T& y) noexcept(is_nothrow_move_constructible<T>::value &&
is_nothrow_move_assignable<T>::value)
{
T t(std::move(x));
x = std::move(y);
y = std::move(t);
}
And this is fast. It will use WRAPPED_VEC's move constructor and move assignment operator.
However, vector swap is even faster: it swaps the vector's 3 pointers, and, if std::allocator_traits<std::vector<T>::allocator_type>::propagate_on_container_swap::value is true, also swaps the allocators (for std::allocator it is false). If it is false and the two allocators are equal (for std::allocator they are), then everything is OK. Otherwise undefined behavior happens.
To make test1 identical to test2 performance-wise you need:
friend
void
swap(WRAPPED_VEC& v1, WRAPPED_VEC& v2)
{
using std::swap;
swap(v1.m_vec, v2.m_vec);
}
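A self-contained sketch of that hidden-friend pattern (the size() accessor is added here only so the result can be checked; it is not part of the original class):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

template <class T>
class WRAPPED_VEC {
public:
    void push_back(T value) { m_vec.push_back(value); }
    std::size_t size() const { return m_vec.size(); }

    // Hidden friend: found by ADL, swaps the member vectors in O(1)
    // instead of going through three moves.
    friend void swap(WRAPPED_VEC& a, WRAPPED_VEC& b) {
        using std::swap;
        swap(a.m_vec, b.m_vec);
    }

private:
    std::vector<T> m_vec;
};
```

With this in place, the idiomatic `using std::swap; swap(v1, v2);` call picks up the friend overload and compiles down to the same pointer swap as plain vectors.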
One interesting thing to point out:
In your case, where you are always using std::allocator<T>, the friend function is always a win. However if your code allowed other allocators, possibly those with state, which might compare unequal, and which might have propagate_on_container_swap::value false (as std::allocator<T> does), then these two implementations of swap for WRAPPED_VEC diverge somewhat:
1. If you rely on std::swap, then you take a performance hit, but you will never have the possibility to get into undefined behavior. Move construction on vector is always well-defined and O(1). Move assignment on vector is always well-defined and can be either O(1) or O(N), and either noexcept(true) or noexcept(false).
If propagate_on_container_move_assignment::value is false, and if the two allocators involved in a move assignment are unequal, vector move assignment will become O(N) and noexcept(false). Thus a swap using vector move assignment will inherit these characteristics. However, no matter what, the behavior is always well-defined.
2. If you overload swap for WRAPPED_VEC, thus relying on the swap overload for vector, then you expose yourself to the possibility of undefined behavior if the allocators compare unequal and have propagate_on_container_swap::value equal to false. But you pick up a potential performance win.
As always, there are engineering tradeoffs to be made. This post is meant to alert you to the nature of those tradeoffs.
PS: The following comment is purely stylistic. All-capital names for class types are generally considered poor style; by tradition, all-capital names are reserved for macros.
The reason for this is that std::swap does have an optimized overload for type std::vector<T> (see right click -> go to definition). To make this code work fast for your wrapper, follow instructions found on cppreference.com about std::swap:
std::swap may be specialized in namespace std for user-defined types,
but such specializations are not found by ADL (the namespace std is
not the associated namespace for the user-defined type). The expected
way to make a user-defined type swappable is to provide a non-member
function swap in the same namespace as the type: see Swappable for
details.