Function Template Type Deduction Rules - C++11

I have:
template<typename ...Ts, typename U, typename=void>
void valid(Ts..., U){}
int main() {
    valid(1.0, 1, 2, 3);
}
clang complains:
note: candidate function [with Ts = <>, U = double, $2 = void] not viable: requires 1 argument, but 4 were provided
void valid(Ts..., U){}
^
And gcc complains:
<source>:2:6: note: template argument deduction/substitution failed:
<source>:5:10: note: candidate expects 1 argument, 4 provided
5 | valid(1.0, 1, 2, 3);
| ~~~~~^~~~~~~~~~~~~~
According to cling, it seems like the compiler deduces that Ts is an empty pack (i.e. <>). Why is that? I thought that, given the argument list (1.0, 1, 2, 3), U would be deduced as int and Ts... as double, int, int. What are the rules for the type deduction here?
If I change the call to:
valid<double, int, int>(1.0, 1, 2, 3);
It works. However, if I change it to:
valid<double, int, int, int>(1.0, 1, 2, 3);
It fails:
<source>:2:6: note: template argument deduction/substitution failed:
<source>:5:33: note: candidate expects 5 arguments, 4 provided
5 | valid<double, int, int, int>(1.0, 1, 2, 3);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~
What rules are used here to determine what types I can specify in the type parameter list?
Thanks.

First, why template argument deduction fails when no template arguments are explicitly specified, as in valid(1.0, 1, 2, 3). This is simple: a function parameter pack that's not trailing (not the last in the function parameter list) is a non-deduced context:
[temp.deduct.type]/5 The non-deduced contexts are:
...
(5.7) — A function parameter pack that does not occur at the end of the parameter-declaration-list.
Second, how explicitly specified template arguments are matched to template parameters, in the presence of a parameter pack. The way it works is, once you get to an argument that corresponds to the pack, all subsequent arguments are taken to be part of that pack.
[temp.deduct.type]/9 ... If Pi is a pack expansion, then the pattern of Pi is compared with each remaining argument in the template argument list of A. Each comparison deduces template arguments for subsequent positions in the template parameter packs expanded by Pi.
Thus, valid<double, int, int>(1.0, 1, 2, 3) works because Ts... is taken to be three types {double, int, int}, and U is deduced from the fourth argument, 3. valid<double, int, int, int>(1.0, 1, 2, 3) doesn't work because Ts... is taken to be four types {double, int, int, int}, and then there's no function argument for the last parameter of type U.
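For completeness: if you can rearrange the signature, moving the pack to the end makes it deducible again. A minimal sketch (valid2 is an illustrative name, not from the question):
// With the pack trailing, ordinary deduction applies:
// U is deduced from the first argument, Ts... from the rest.
template<typename U, typename ...Ts>
void valid2(U, Ts...) {}

int main() {
    valid2(1.0, 1, 2, 3); // OK: U = double, Ts = {int, int, int}
}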

Eigen::Matrix<float, 4, 4, 4, 4, 4> vs Eigen::Matrix<float, -1, -1, 0, -1, -1>

I'm trying to understand the debug output of CodeLLDB in VS Code.
I have two programs doing almost the same thing, but while debugging I can see the following difference: the way an Eigen matrix is displayed differs.
In program A, a matrix is shown as
Eigen::Matrix<float, 4, 4, 4, 4, 4>
and in program B a similar matrix is shown as
Eigen::Matrix<float, -1, -1, 0, -1, -1> in the CodeLLDB debugger.
This is after only initializing the matrices (before resizing them to some required number of rows and columns). What do these numbers indicate? (they clearly are not related to the matrix size...)
The Matrix class is documented here. Matrix is a templated class allowing standard numeric or user-defined scalar types, dynamic or fixed sizes for rows and columns, and other options. The signature is:
template<typename Scalar_, int Rows_, int Cols_, int Options_, int MaxRows_, int MaxCols_>
class Eigen::Matrix< Scalar_, Rows_, Cols_, Options_, MaxRows_, MaxCols_ >
Scalar_ is the element type of the matrix: float, double, int, whatever.
Rows_ and Cols_ are the compile-time dimensions of the matrix. The constant Dynamic (defined as -1) is used when a dimension is not known at compile time, telling Eigen that the matrix is dynamically sized. The same values appear in MaxRows_ and MaxCols_.
Options_ is the most complicated one.
Options_ A combination of either RowMajor or ColMajor, and of either AutoAlign or DontAlign. The former controls storage order, and defaults to column-major. The latter controls alignment, which is required for vectorization. It defaults to aligning matrices except for fixed sizes that aren't a multiple of the packet size.
There is an enum in Constants.h that defines these values. Normally this parameter is 0, since most matrices default to auto-aligned and column-major; RowMajor is 1 and DontAlign is 2.
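In code, the flags combine like this (a hedged sketch; the variable names are illustrative):
#include <Eigen/Dense>

int main() {
    // ColMajor == 0 and AutoAlign == 0, so the default Options_ is 0.
    Eigen::Matrix<float, 2, 3> defaults;                                 // Options_ == 0
    Eigen::Matrix<float, 2, 3, Eigen::RowMajor> rowMajor;                // Options_ == 1
    Eigen::Matrix<float, 2, 3, Eigen::RowMajor | Eigen::DontAlign> both; // Options_ == 3
}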
So to decode your two matrices:
Matrix<float,4,4,4,4,4> is a matrix holding float values, of fixed size 4 x 4, with an Options_ value of 4. Note that 4 is not a valid combination of the option flags, so some operation has happened on that matrix which set the options to that value. Otherwise it is equivalent to the typedef Matrix4f (again, modulo whatever operation set those options).
Matrix<float,-1,-1,0,-1,-1> is a matrix holding float values, of dynamic size, that is column-major and auto-aligned. This is equivalent to MatrixXf.
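To see those typedefs concretely, here is a minimal sketch (assuming Eigen 3's headers; the static_asserts just confirm the equivalences described above):
#include <Eigen/Dense>
#include <type_traits>

int main() {
    // Matrix4f is Matrix<float, 4, 4>: fixed 4 x 4, default options (0).
    static_assert(std::is_same<Eigen::Matrix4f,
                               Eigen::Matrix<float, 4, 4>>::value, "");
    // MatrixXf is Matrix<float, Dynamic, Dynamic>, and Dynamic == -1,
    // which is exactly the -1, -1, 0, -1, -1 the debugger prints.
    static_assert(std::is_same<Eigen::MatrixXf,
                               Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic>>::value, "");

    Eigen::MatrixXf m;  // 0 x 0 until resized
    m.resize(4, 4);     // dimensions chosen at run time
}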

Binary operators - are operand types supposed to be identical?

In the below code:
package main

import "fmt"

func main() {
    a := 1e6
    b := 2
    fmt.Println(b / a)
    fmt.Println(2 / a)
}
Why is 2 / a valid syntax?
Why is b / a invalid syntax?
Why is 2 / a valid syntax?
"For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants.". 2 is an untyped constant.
Why is b / a invalid syntax?
It isn't. It's syntactically perfectly valid; it's just semantically wrong because it's an operation on mismatched types.
It is not invalid syntax. It is mismatched types:
invalid operation: b / a (mismatched types int and float64)
For binary operations, types must be identical.
b is an int, and a is a float64, so to perform an operation, you have to be explicit about the types:
float64(b)/a
This makes both operands float64. Alternatively:
b/int(a)
which makes both operands int.
This is not a problem for 2/a because 2 is an untyped constant, and its actual type is determined based on context. Since a is float64, in this case 2 is a float64.
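Putting it together, a minimal runnable sketch (output values assume the inputs above):
package main

import "fmt"

func main() {
    a := 1e6 // float64
    b := 2   // int

    // fmt.Println(b / a) // compile error: mismatched types int and float64
    fmt.Println(float64(b) / a) // 2e-06: both operands are float64
    fmt.Println(b / int(a))     // 0: both operands are int (integer division)
    fmt.Println(2 / a)          // 2e-06: the untyped constant 2 adapts to float64
}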
Other answers offer good points; I just wanted to add how I remember the rules. First, this example:
a := 1e6
b := 2
fmt.Println(b / a)
With this example, both values are "type locked", that is to say, you're performing the division after declaring a type for each variable. Since the types don't match, the operation fails:
invalid operation: b / a (mismatched types int and float64)
This example:
a := 1e6
fmt.Println(2 / a)
Now 2 has not been assigned a type, so it can essentially be any numeric type. Since a is float64, 2 just says "OK, I will be float64 too", and the division succeeds.
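A short sketch of this "type lock" mnemonic (two is an illustrative constant name):
package main

import "fmt"

const two = 2 // untyped constant: not type-locked yet

func main() {
    a := 1e6              // a is type-locked to float64
    fmt.Println(two / a)  // OK: two becomes float64 in this expression
    b := two              // b is now type-locked to int (the default type)
    fmt.Printf("%T\n", b) // int
    // fmt.Println(b / a) // would fail: mismatched types int and float64
}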

Why a pointer dereference is a different object in slices

func Test(t *testing.T) {
    a1 := 5
    a2 := 6
    a3 := 7
    p1 := &a1
    p2 := &a2
    p3 := &a3
    s := make([]int, 0)
    s = append(s, *p1, *p2, *p3)
    a1 = 8
    if s[0] != 8 {
        t.Fail()
    }
}
Why does the test fail?
I don't have much experience in Go, but I thought that two variables could share the same value stored at one memory address. But from what I see in the test above, that's not correct.
Doesn't that mean that if I want to change some variable from different places in my program, I have to store it in a variable and then use only pointers to it everywhere in the code?
This has nothing to do with the use of slice types per se, and everything to do with the fact that you're storing int values.
It is true that p1, p2, and p3 have type *int (pointer to int). They point to a1, a2, and a3 respectively (and those three have type int, due to the default type rules, which dictate that the untyped constants 5, 6, and 7 become int here). But this means that *p1, *p2, and *p3 have type int. So:
s = append(s, *p1, *p2, *p3)
does the same thing as:
s = append(s, 5, 6, 7)
would do (given that *p1 is 5, and so on, which we can see by just looking at the code—none of the a variables have changed yet).
You then change variable a1 to 8. So *p1 is also now 8, but you've built your slice out of the three values 5, 6, and 7, and those values are not changed here. If you print *p1, you will see that it is now 8. If you assign to *p2, then print a2, you will see that a2 is updated by assigning to *p2.
As zerkms noted in a comment, if you had built your slice out of type *int and made s[0] point to a1, then changing a1 would also change *s[0]. Similarly, changing *s[1] would change a2. You didn't do that, though, so they don't. Again, though, it's not a matter of whether you use a slice here, but rather whether the variables in question are pointers, and if so, what they point-to[1] at the time you use their values.
[1] I hyphenated this to avoid the urge to write "to what they point". :-)
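Here is a hedged sketch of that pointer-slice alternative, adapted from the test above (the package and test names are illustrative):
package demo

import "testing"

func TestPointers(t *testing.T) {
    a1 := 5
    a2 := 6
    s := []*int{&a1, &a2} // store the pointers, not copies of the ints

    a1 = 8 // changes the int object that s[0] points to
    if *s[0] != 8 {
        t.Fail()
    }

    *s[1] = 9 // assigning through the slice...
    if a2 != 9 {
        t.Fail() // ...updates a2 as well
    }
}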

What does the symbol [:] mean in Go?

I found this in some code:
h := s.Hash(tx)
sig, err := crypto.Sign(h[:], prv)
What does [:] mean?
If this is the full slice of the array, why not pass the array itself? What kind of coding style is this, I wonder...
In Go, arrays and slices are distinct types and cannot be used interchangeably; however, you can easily make a slice from an array using the [:] operator.
This article explains in detail - Go Slices: Usage and Internals.
See also the Slice Expressions section of The Go Programming Language Specification.
In a nutshell, the [:] operator allows you to create a slice from an array, optionally using start and end bounds. For example:
a := [4]int{1, 2, 3, 4} // "a" has type [4]int (array of 4 ints)
x := a[:] // "x" has type []int (slice of ints) and length 4
y := a[:2] // "y" has type []int, length 2, values {1, 2}
z := a[2:] // "z" has type []int, length 2, values {3, 4}
m := a[1:3] // "m" has type []int, length 2, values {2, 3}
Presumably the reason for this distinction is an extra measure of type safety. That is, length is a formal part of an array type (e.g. [4]int is an array of four ints, and a different size is a different type), whereas slices can have any length, including zero. So if you want the safety of a known-length sequence, use an array; otherwise use a slice for convenience.
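That distinction is also why call sites like crypto.Sign(h[:], prv) appear: the function takes a slice, while the hash value is an array. A minimal sketch (sum is an illustrative helper, not from the question):
package main

import "fmt"

// sum takes a slice, so it works with a sequence of any length.
func sum(xs []int) int {
    total := 0
    for _, x := range xs {
        total += x
    }
    return total
}

func main() {
    a := [4]int{1, 2, 3, 4} // a has type [4]int
    fmt.Println(sum(a[:]))  // 10: [:] adapts the array to the slice parameter
    // sum(a) // compile error: cannot use a (type [4]int) as type []int
}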

Creating a Vector of Vectors in Rust

This code won't compile:
fn main() {
    let m1 = vec![1, 2, 3];
    let m2 = vec![&m1, &m1, &m1];
    let m3 = vec![vec![1, 2, 3], vec![4, 5, 6], vec![7, 8, 9]];
    for i in &m2 {
        for j in i {
            println!("{}", j);
        }
    }
    for i in &m3 {
        for j in i {
            println!("{}", j);
        }
    }
}
error[E0277]: the trait bound `&&std::vec::Vec<{integer}>: std::iter::Iterator` is not satisfied
--> src/main.rs:8:18
|
8 | for j in i {
| ^ `&&std::vec::Vec<{integer}>` is not an iterator; maybe try calling `.iter()` or a similar method
|
= help: the trait `std::iter::Iterator` is not implemented for `&&std::vec::Vec<{integer}>`
= note: required by `std::iter::IntoIterator::into_iter`
How is m2 different than m3 such that m3 causes no issues, but m2 prevents compilation?
Is there an easier way to create a vector of vector of... to any desired depth? The way I have it working (m3) seems so clunky.
How is m2 different than m3 ...
Check the types step by step. m1 is of type Vec<i32> (it could be any other integer type if the context demanded it, but unconstrained integer literals default to i32). Why? Because the elements in the vec![] macro are integer literals. Now you are creating m2:
let m2 = vec![&m1, &m1, &m1];
What is the type of the elements in this macro? Well, we already said m1 has the type Vec<i32>, so &m1 has the type &Vec<i32>. So the resulting type of m2 is Vec<&Vec<i32>> (a vector full of references to other vectors).
However, m3 is of type Vec<Vec<i32>>, since the elements in the (outer) vec![] macro are of type Vec<i32> (no reference!).
Hint: to easily check the type of any variable (such as foo), type:
let _: () = foo;
This will result in a compiler error that tells you the type of foo.
... such that m3 causes no issues, but m2 prevents compilation?
Now that we know the types of m2 and m3, let's look at the loops. for loops work by accepting something that implements IntoIterator. You are passing &m2, which is of type &Vec<&Vec<i32>> (note the two references). We can see that IntoIterator is indeed implemented for a reference to a vector:
impl<'a, T> IntoIterator for &'a Vec<T> {
    type Item = &'a T;
    // ...
}
This means that you get an iterator that spits out references to the inner type T (type Item = &'a T). Our inner type of m2 is &Vec<i32>, so we will get items of type &&Vec<i32> (two references!). Your variable i has this exact type.
Then you want to iterate again with this inner loop:
for j in i { ... }
But i has this double-reference type and there isn't an implementation of IntoIterator for that type. To iterate it, you have to either dereference it like:
for j in *i { ... }
Or even better: make i be of type &Vec<i32> (one reference!) by stripping one reference away with pattern matching in the outer loop:
for &i in &m2 { ... }
Your m3 loop does the same, but since m3 is of another type (with one reference less), it works (I hope you can see why).
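Both fixes, in one runnable sketch (assuming the reference-based m2 from the question):
fn main() {
    let m1 = vec![1, 2, 3];
    let m2 = vec![&m1, &m1, &m1]; // Vec<&Vec<i32>>

    // Fix 1: dereference once, so `*i` is &Vec<i32>.
    for i in &m2 {
        for j in *i {
            println!("{}", j);
        }
    }

    // Fix 2: strip a reference in the pattern, so `i` is &Vec<i32>.
    for &i in &m2 {
        for j in i {
            println!("{}", j);
        }
    }
}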
Is there an easier way to create a vector of vector of... to any desired depth
Even if m2 worked, it wouldn't hold the same values as m3. To make m2 of the type Vec<Vec<i32>> (like m3), you should clone m1 instead of taking references to it.
let m2 = vec![m1.clone(), m1.clone(), m1.clone()];
We can do even better by using the vec![_; _] form of the macro:
let m2 = vec![m1; 3]; // three times the value of `m1`
As a last note, you should consider not using nested Vecs. The nesting adds overhead because each inner vector's data lives in its own heap allocation instead of one contiguous block.
