std::initializer_list<int> FOO = {1, 2, 3};
const std::initializer_list<int> BAR = {1, 2, 3};
What are the differences between those two variables? From my understanding of std::initializer_list, access is const-only anyway. Does making the entire thing const actually change anything?
It turns out std::initializer_list has an implicitly generated operator=. So the second declaration prevents
BAR = {};
Apart from that, since every member function of std::initializer_list is const-qualified, I cannot think of any practical situation where this would make a difference.
If you were directly playing with its type (e.g. using std::is_same or std::is_const) then the const would matter.
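For instance, a minimal sketch showing both the assignment difference and the type-trait difference (using the FOO and BAR declarations from the question):

#include <initializer_list>
#include <type_traits>

std::initializer_list<int> FOO = {1, 2, 3};
const std::initializer_list<int> BAR = {1, 2, 3};

int main()
{
    FOO = {};     // OK: the implicitly generated operator= is usable
    // BAR = {};  // error: no assignment to a const object

    static_assert(!std::is_const<decltype(FOO)>::value, "FOO is not const");
    static_assert(std::is_const<decltype(BAR)>::value, "BAR is const");
}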
Related
In the initialization of a vector of pairs
std::vector<std::pair<int, std::string>> foo{{1.0, "one"}, {2.0, "two"}};
how am I supposed to interpret the construction of foo? As I understand it,
1) The constructor is called with braced-initialization syntax, so the overload vector( std::initializer_list<T> init, const Allocator& alloc = Allocator() ); is strongly preferred and selected.
2) The template parameter of std::initializer_list<T> is deduced to std::pair<int, std::string>.
3) Each element of foo is a std::pair. However, std::pair has no overload accepting std::initializer_list.
I am not so sure about step 3. I know the inner braces can't be interpreted as std::initializer_list since they are heterogeneous. What mechanism in the standard is actually constructing each element of foo? I suspect the answer has something to do with forwarding the arguments in the inner braces to the overload template< class U1, class U2 > pair( U1&& x, U2&& y ); but I don't know what this is called.
EDIT:
I figure a simpler way to ask the same question would be: When one does
std::map<int, std::string> m = { // nested list-initialization
{1, "a"},
{2, {'a', 'b', 'c'} },
{3, s1}
};
as shown in the cppreference example, where in the standard does it say that {1, "a"}, {2, {'a', 'b', 'c'} }, and {3, s1} each get forwarded to the constructor for std::pair<int, std::string>?
Usually, expressions are analyzed inside-out: The inner expressions have types and these types then decide which meaning the outer operators have and which functions are to be called.
But initializer lists are not expressions, and have no type. Therefore, inside-out does not work. Special overload resolution rules are needed to account for initializer lists.
The first rule is: If there are constructors with a single parameter that is some initializer_list<T>, then in a first round of overload resolution only such constructors are considered (over.match.list).
The second rule is: For each initializer_list<T> candidate (there could be more than one of them per class, with different T each), it is checked that each initializer can be converted to T, and only those candidates remain where this works out (over.ics.list).
This second rule is basically where the initializer-lists-have-no-type hurdle is cleared and inside-out analysis resumes.
Once overload resolution has decided that a particular initializer_list<T> constructor should be used, copy-initialization is used to initialize the elements of type T of the initializer list.
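For the vector-of-pairs example from the question, that plays out roughly like this (a minimal sketch, using plain int literals):

#include <string>
#include <utility>
#include <vector>

int main()
{
    // First round of overload resolution considers only initializer_list
    // constructors; the initializer_list<pair<int, string>> overload is chosen.
    std::vector<std::pair<int, std::string>> foo{{1, "one"}, {2, "two"}};

    // Each inner braced list copy-initializes one element of type T,
    // which is what ends up calling pair's two-argument constructor:
    std::pair<int, std::string> element = {1, "one"};
    (void)element;
    (void)foo;
}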
You are confusing two different concepts:
1) initializer lists
std::initializer_list<T>: mainly used for initializing collections. In this case, all of the elements must have the same type (so it is not applicable to std::pair).
Example:
std::vector<int> vec {1, 2, 3, 4, 5};
2) Uniform initialization
Uniform initialization: braces are used to construct and initialize objects such as structs, classes (with an appropriate constructor), and basic types (int, char, etc.).
Example:
struct X { int x; std::string s;};
X x{1, "Hi"}; // Not an initializer_list here.
That said, to initialize a std::pair from a braced initializer you need a constructor that takes two arguments, i.e. the first and the second element, not a std::initializer_list<T>. For example, on my machine with VS2015 installed, this constructor looks like this:
template<class _Other1,
class _Other2,
class = enable_if_t<is_constructible<_Ty1, _Other1>::value
&& is_constructible<_Ty2, _Other2>::value>,
enable_if_t<is_convertible<_Other1, _Ty1>::value
&& is_convertible<_Other2, _Ty2>::value, int> = 0>
constexpr pair(_Other1&& _Val1, _Other2&& _Val2) // -----> Here the constructor takes 2 input params
_NOEXCEPT_OP((is_nothrow_constructible<_Ty1, _Other1>::value
&& is_nothrow_constructible<_Ty2, _Other2>::value))
: first(_STD forward<_Other1>(_Val1)), // ----> initialize the first
second(_STD forward<_Other2>(_Val2)) // ----> initialize the second
{ // construct from moved values
}
Frameworks I've seen before allow passing a chain of multiple constants via a single parameter, like:
foo(FLAG_A | FLAG_B | FLAG_C);
So, I believe they act like booleans, letting the function know which flags have been given.
Now I want to implement something like that.
What is this concept called?
It's based on bitwise ORing. Normally, each of the symbolic constants will be just one distinct bit, e.g., as in:
enum {
FLAG_A = 1,
FLAG_B = 1<<1,
FLAG_C = 1<<2,
};
so that you can then combine them with |, test for each individual one with &, and remove one set of flags from another with & ~.
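A minimal sketch of those operations (foo here is just an illustrative function, not from the question):

#include <cstdio>

enum {
    FLAG_A = 1,
    FLAG_B = 1 << 1,
    FLAG_C = 1 << 2,
};

void foo(unsigned flags)
{
    if (flags & FLAG_A) std::puts("A is set");   // test an individual flag with &
    if (flags & FLAG_B) std::puts("B is set");
    if (flags & FLAG_C) std::puts("C is set");
}

int main()
{
    unsigned flags = FLAG_A | FLAG_C;  // combine flags with |
    flags &= ~FLAG_C;                  // remove a flag with & ~
    foo(flags);                        // prints only "A is set"
}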
In .Net, these are defined by an enum using the FlagsAttribute:
[Flags()]
public enum Foo
{
Bit1 = 1,
Bit2 = 2,
Bit4 = 4,
Bit8 = 8,
etc.
}
// Or define using explicit binary syntax
[Flags()]
public enum Foo
{
Bit1 = 0b_0000_0001,
Bit2 = 0b_0000_0010,
Bit4 = 0b_0000_0100,
Bit8 = 0b_0000_1000,
etc.
}
And to utilise:
SomeFunction(Foo.Bit1 | Foo.Bit4 | etc);
I would suggest that your current name (Flags) seems to be the most appropriate definition, at least in this context.
Apparently "flags and bitmasks" are the right keywords to find more about this. "Flags" alone didn't before. Great thanks for the explanatory answers nevertheless!
I am having trouble trying to understand pattern matching rules in Rust. I originally thought that the idea behind patterns are to match the left-hand side and right-hand side like so:
struct S {
x: i32,
y: (i32, i32)
}
let S { x: a, y: (b, c) } = S { x: 1, y: (2, 3) };
// `a` matches `1`, `(b, c)` matches `(2, 3)`
However, when we want to bind a reference to a value on the right-hand side, we need to use the ref keyword.
let &(ref a, ref b) = &(3, 4);
This feels rather inconsistent.
Why can't we use the dereferencing operator * to match the left-hand side and right-hand side like this?
let &(*a, *b) = &(3, 4);
// `*a` matches `3`, `*b` matches `4`
Why isn't this the way patterns work in Rust? Is there a reason why this isn't the case, or have I totally misunderstood something?
Using the dereferencing operator would be very confusing in this case. ref effectively takes a reference to the value. These are more-or-less equivalent:
let bar1 = &42;
let ref bar2 = 42;
Note that in let &(ref a, ref b) = &(3, 4), a and b both have the type &i32; they are references. Also note that thanks to match ergonomics, let (a, b) = &(3, 4) is equivalent and shorter.
Furthermore, the ampersand (&) and asterisk (*) symbols are used for types. As you mention, pattern matching wants to "line up" the value with the pattern. The ampersand is already used to match and remove one layer of references in patterns:
let foo: &i32 = &42;
match foo {
&v => println!("{}", v),
}
By analogy, it's possible that some variant of this syntax might be supported in the future for raw pointers:
let foo: *const i32 = std::ptr::null();
match foo {
*v => println!("{}", v),
}
Since both ampersand and asterisk could be used to remove one layer of reference/pointer, they cannot be used to add one layer. Thus some new keyword was needed and ref was chosen.
See also:
Meaning of '&variable' in arguments/patterns
What is the syntax to match on a reference to an enum?
How can the ref keyword be avoided when pattern matching in a function taking &self or &mut self?
How does Rust pattern matching determine if the bound variable will be a reference or a value?
Why does pattern matching on &Option<T> yield something of type Some(&T)?
In this specific case, you can achieve the same with neither ref nor asterisk:
fn main() {
let (a, b) = &(3, 4);
show_type_name(a);
show_type_name(b);
}
fn show_type_name<T>(_: T) {
println!("{}", std::any::type_name::<T>()); // rust 1.38.0 and above
}
It shows both a and b to be of type &i32. This ergonomics feature is called binding modes.
But it still doesn't answer the question of why the ref pattern exists in the first place. I don't think there is a definite answer to that; the syntax simply settled on what it is now regarding identifier patterns.
I am writing a function in C++
int maxsubarray(vector<int>&nums)
say I have a vector
v={1,2,3,4,5}
I want to pass
{3,4,5}
to the function, i.e. pass the vector starting from index 2. In C I know I can call maxsubarray(v+2),
but in C++ it doesn't work. I could modify the function by adding a start-index parameter, of course. I just want to know: can I do it without modifying my original function?
THX
You will have to create a temporary vector with the part you want to pass:
std::vector<int> v = {1,2,3,4,5};
std::vector<int> v2(v.begin() + 2, v.end());
maxsubarray(v2);
The obvious solution is to make a new vector and pass that one instead. I definitely do not recommend that. The most idiomatic way is to make your function take iterators:
template<typename It>
typename It::value_type maxsubarray(It begin, It end) { ... }
and then use it like this:
std::vector<int> nums(...);
auto max = maxsubarray(begin(nums) + 2, end(nums));
Anything else involving copies is just inefficient and unnecessary.
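For completeness, here is a minimal sketch of such an iterator-based version (a Kadane-style maximum subarray sum, assuming a non-empty range; std::iterator_traits is used so that plain pointers work as well):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

template <typename It>
typename std::iterator_traits<It>::value_type maxsubarray(It begin, It end)
{
    auto best = *begin;      // best sum found so far
    auto current = *begin;   // best sum ending at the current element
    for (It it = std::next(begin); it != end; ++it) {
        current = std::max(*it, current + *it);
        best = std::max(best, current);
    }
    return best;
}

int main()
{
    std::vector<int> nums = {1, 2, 3, 4, 5};
    std::cout << maxsubarray(nums.begin() + 2, nums.end()) << '\n';  // prints 12 for {3, 4, 5}
}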
Not without constructing another vector.
You can either build a new vector and pass it by reference to the function (though this might not be ideal from a performance point of view; you generally pass by reference to avoid unnecessary copies) or use pointers:
//copy the vector
std::vector<int> copy(v.begin()+2, v.end());
maxsubarray(copy);
//pass a pointer to the first element of the subrange
//(the function then also needs to know how many elements there are)
int maxsubarray(int * nums, std::size_t count)
maxsubarray(&v[2], v.size() - 2);
You could try calling it with a temporary:
int myMax = maxsubarray(vector<int>(v.begin() + 2, v.end()));
That might require changing the function signature to
int maxsubarray(const vector<int> &nums);
since (I think) temporaries can't bind to non-const references, but that change should be preferred here if maxsubarray won't modify nums.
I was playing with bind and I was thinking, are lambdas as expensive as function pointers?
What I mean is, as I understand lambdas, they are syntactic sugar for functors and bind is similar. However, if you do this:
#include<functional>
#include<iostream>
void fn2(int a, int b)
{
std::cout << a << ", " << b << std::endl;
}
void fn1(int a, int b)
{
//auto bound = std::bind(fn2, a, b);
//static auto bound = std::bind(fn2, a, b);
//auto bound = [&]{ fn2(a, b); };
static auto bound = [&]{ fn2(a, b); };
bound();
}
int main()
{
fn1(3, 4);
fn1(1, 2);
return 0;
}
Now, if I use the 1st, auto bound = std::bind(fn2, a, b);, I get the output 3, 4 followed by 1, 2. With the 2nd I get 3, 4 followed by 3, 4. With the 3rd and 4th I get the same output as with the 1st.
Now I get why the 1st and 2nd work that way: they are initialised at the beginning of the function call (the static one only the 1st time it is called). However, the 3rd and 4th seem to involve compiler magic where the generated functors are not really holding references to the enclosing scope's variables, but are latching on to the symbols themselves, whether initialised only the first time or every time.
Can someone clarify what is actually happening here?
Edit: What I was missing was using static auto bound = std::bind(fn2, std::ref(a), std::ref(b)); to make it behave like the 4th option.
You have this code:
static auto bound = [&]{ fn2(a, b); };
Initialization is done only the first time you invoke this function, because the variable is static, so in fact it happens only once. The compiler creates a closure when you make a lambda, so the references to a and b from the first call to fn1 were captured. That's very risky: it may lead to dangling references. I'm surprised it didn't crash, since you are making a closure over function parameters passed by value, i.e. over local variables.
I recommend this excellent article about lambdas: http://www.cprogramming.com/c++11/c++11-lambda-closures.html .
As a general rule, only use [&] lambdas when your closure is going to go away by the end of the current scope.
If it is going to outlast the current scope, and you need by-reference, explicitly capture the things you are going to capture, or create local pointers to the things you are going to capture and capture them by-value.
In your case, your static lambda code is full of undefined behavior, as you [&] capture a and b in the first call, then use it in the second call.
In theory, the compiler could rewrite your code to capture a and b by value instead of by reference, then call that every time, because the only difference between that implementation and the one you wrote occurs when the behavior is undefined, and the result will be much faster.
It could do a more efficient job by ignoring your static completely, as the entire state of your static object is undefined after you leave scope the first time you call, and the construction has no visible side effects.
To fix your problem with the lambdas, use [=] or [a,b] to introduce the lambda, and it will capture the a and b by value. I prefer to capture state explicitly on lambdas when I expect the lambda to persist longer than the current block.
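As a minimal sketch, the code from the question with only the capture changed to by-value:

#include <iostream>

void fn2(int a, int b)
{
    std::cout << a << ", " << b << std::endl;
}

void fn1(int a, int b)
{
    // a and b are captured by value; there are no dangling references,
    // but the static closure still remembers the values from the first call
    static auto bound = [=]{ fn2(a, b); };
    bound();
}

int main()
{
    fn1(3, 4);  // prints 3, 4
    fn1(1, 2);  // prints 3, 4 again: the closure was built once, by value
    return 0;
}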