Can C++11 multiline lambdas deduce intrinsic types?

I use C++11 lambdas quite a lot, and I've often run into compile errors on multiline lambdas because I forgot to add the return type, as is expected. But I recently ran into one example that doesn't have this issue. It looks something like this:
auto testLambda = [](bool arg1, bool arg2)
{
    if (arg1)
    {
        if (!arg2)
        {
            return false;
        }
        return true;
    }
    return false;
};
This compiles just fine even though there's no return type specified. Is this just Visual Studio being dumb and allowing something it shouldn't, or can lambdas just always deduce intrinsic types?
I tried this with return values of all ints or floating point values and it also compiled just fine. I just found this to be really surprising so I wanted to be absolutely sure how it works before I start making assumptions and omitting return types that might break later on.

Lambdas follow the same template deduction rules as auto-returning functions:
Template argument deduction is used in declarations of functions, when deducing the meaning of the auto specifier in the function's return type, from the return statement.
For auto-returning functions, the parameter P is obtained as follows: in T, the declared return type of the function that includes auto, every occurrence of auto is replaced with an imaginary type template parameter U. The argument A is the expression of the return statement, and if the return statement has no operand, A is void(). After deduction of U from P and A following the rules described above, the deduced U is substituted into T to get the actual return type:
auto f() { return 42; } // P = auto, A = 42:
// deduced U = int, the return type of f is int
If such function has multiple return statements, the deduction is performed for each return statement. All the resulting types must be the same and become the actual return type.
If such function has no return statement, A is void() when deducing.
Note: the meaning of decltype(auto) placeholder in variable and function declarations does not use template argument deduction.

go generics: how to declare a type parameter compatible with another type parameter

I'm looking for a way to declare type compatibility between type parameters in Go generics constraints.
More specifically, I need to say some type T is compatible with another type U. For instance, T is a pointer to a struct that implements the interface U.
Below is a concrete example of what I want to accomplish:
NOTE: Please do not answer with alternative ways to implement "array prepend". I've only used it as a concrete application of the problem I'm looking to solve. Focusing on the specific example would sidetrack the conversation.
func Prepend[T any](array []T, values ...T) []T {
    if len(values) < 1 { return array }
    result := make([]T, len(values) + len(array))
    copy(result, values)
    copy(result[len(values):], array)
    return result
}
The above function can be called to prepend elements of a given type T to an array of the same type, so the code below works just fine:
type Foo struct{ x int }
func (self *Foo) String() string { return fmt.Sprintf("foo#%d", self.x) }
func grow(array []*Foo) []*Foo {
    return Prepend(array, &Foo{x: len(array)})
}
If the array type is different from the type of the elements being added (say, an interface implemented by the elements' type), the code fails to compile (as expected) with type *Foo of &Foo{…} does not match inferred type Base for T:
type Base interface { fmt.Stringer }
type Foo struct{ x int }
func (self *Foo) String() string { return fmt.Sprintf("foo#%d", self.x) }
func grow(array []Base) []Base {
    return Prepend(array, &Foo{x: len(array)})
}
The intuitive solution to that is to change the type parameters for Prepend so that array and values have different, but compatible types. That's the part I don't know how to express in Go.
For instance, the code below doesn't work (as expected) because the types of array and values are independent of each other. Similar code would work with C++ templates, since compatibility is validated after template instantiation (similar to duck typing). The Go compiler reports the error invalid argument: arguments to copy result (variable of type []A) and values (variable of type []T) have different element types A and T:
func Prepend[A any, T any](array []A, values ...T) []A {
    if len(values) < 1 { return array }
    result := make([]A, len(values) + len(array))
    copy(result, values)
    copy(result[len(values):], array)
    return result
}
I've tried making the type T compatible with A via the constraint ~A, but Go doesn't allow a type parameter to be used as a constraint term, and reports the error type in term ~A cannot be a type parameter:
func Prepend[A any, T ~A](array []A, values ...T) []A {
What's the proper way to declare this type compatibility as generics constraints without resorting to reflection?
This is a limitation of Go's type parameter inference, the system that tries to automatically fill in type parameters in cases where you don't specify them explicitly. Try adding the type parameter explicitly, and you'll see that it works. For example:
// This works.
func grow(array []Base) []Base {
    return Prepend[Base](array, &Foo{x: len(array)})
}
You can also try explicitly converting the *Foo value to a Base interface. For example:
// This works too.
func grow(array []Base) []Base {
    return Prepend(array, Base(&Foo{x: len(array)}))
}
Explanation
First, you should bear in mind that the "proper" use of type parameters is to always include them explicitly. The option to omit the type parameter list is considered a "nice to have", but not intended to cover all use cases.
From the blog post An Introduction To Generics:
Type inference in practice
The exact details of how type inference works are complicated, but using it is not: type inference either succeeds or fails. If it succeeds, type arguments can be omitted, and calling generic functions looks no different than calling ordinary functions. If type inference fails, the compiler will give an error message, and in those cases we can just provide the necessary type arguments.
In adding type inference to the language we’ve tried to strike a balance between inference power and complexity. We want to ensure that when the compiler infers types, those types are never surprising. We’ve tried to be careful to err on the side of failing to infer a type rather than on the side of inferring the wrong type. We probably have not gotten it entirely right, and we may continue to refine it in future releases. The effect will be that more programs can be written without explicit type arguments. Programs that don’t need type arguments today won’t need them tomorrow either.
In other words, type inference may improve over time, but you should expect it to be limited.
In this case:
// This works.
func grow(array []*Foo) []*Foo {
    return Prepend(array, &Foo{x: len(array)})
}
It is relatively simple for the compiler to see that the argument types []*Foo and *Foo match the patterns []T and ...T by substituting T = *Foo.
So why does the plain solution you gave first not work?
// Why does this not work?
func grow(array []Base) []Base {
    return Prepend(array, &Foo{x: len(array)})
}
To make []Base and *Foo match the patterns []T and ...T, simply substituting T = *Foo or T = Base produces no exact match. You have to apply the rule that *Foo is assignable to the interface type Base to see that T = Base works. Apparently the inference system doesn't go that extra mile, so it fails here.
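To spell out that missing step in code, here is a minimal sketch (the helper name growExplicit is invented for this illustration; Base, Foo and Prepend are the declarations from above):
// Ordinary assignability: a *Foo value satisfies the Base interface.
var _ Base = &Foo{x: 0}

// With T spelled out as Base, the compiler applies that same assignability
// rule to the variadic *Foo argument, so the call compiles.
func growExplicit(array []Base) []Base {
    return Prepend[Base](array, &Foo{x: len(array)})
}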

Is it possible to create a comparison operator from a string?

I'm trying to create a function that builds an if condition from a predefined array.
For example:
package errors

type errorCase struct {
    // This is the field I need to get in another struct
    Field string
    // The comparison operator
    TestOperator string
    // The value that the expected one should not be equal to...
    WrongValue interface{}
}
var ErrorCases = []*errorCase{
    {
        "MinValue",
        "<",
        0,
    },
    {
        "MaxValue",
        "==",
        0,
    },
}
Actually, I made a new function with a for loop that iterates through all of these "error cases":
func isDirty(questionInterface models.QuestionInterface) bool {
    for _, errorCase := range errors.ErrorCases {
        s := reflect.ValueOf(&questionInterface).Elem()
        value := s.Elem().FieldByName(errorCase.Field)
        // At this point I need to create my if condition
        // to compare the value of the value var and the wrong one
        // With the given comparison operator
    }
    // Should return the comparison test value
    return true
}
Is it possible to create an if condition like that?
With the reflect package?
I think it is possible, but I can't figure out where to start.
This is possible. I built a generic comparison library like this once before.
A comparison, in simple terms, contains 3 parts:
A value of some sort, on the left of the comparison.
An operator (=, <, >, ...).
A value of some sort, on the right of the comparison.
Those 3 parts contain only two different kinds of thing - a value and an operator. I attempted to abstract those two types into their base forms.
value could be anything, so we use the empty interface - interface{}.
operator is part of a finite set, each with their own rules.
type Operator int
const (
    Equals Operator = 1
)
Evaluating a comparison with an = sign has only one rule to be valid - both values should be of the same type. You can't compare 1 and hello. After that, you just have to make sure the values are the same.
We can implement a new meta-type that wraps the requirement for evaluating an operator.
// Function signature for a "rule" of an operator.
type validFn func(left, right interface{}) bool

// Function signature for evaluating an operator comparison.
type evalFn func(left, right interface{}) bool

type operatorMeta struct {
    valid []validFn
    eval  evalFn
}
Now that we've defined our types, we need to implement the rules and comparison functions for Equals.
func sameTypes(left, right interface{}) bool {
    return reflect.TypeOf(left).Kind() == reflect.TypeOf(right).Kind()
}

func equals(left, right interface{}) bool {
    return reflect.DeepEqual(left, right)
}
Awesome! So we can now validate that our two values are of the same type, and we can compare them against each other if they are. The last piece of the puzzle is mapping each operator to its rules and evaluation function, and having a function to execute all of this logic.
var args = map[Operator]operatorMeta{
    Equals: {
        valid: []validFn{sameTypes},
        eval:  equals,
    },
}
func compare(o Operator, left, right interface{}) (bool, error) {
    opArgs, ok := args[o]
    if !ok {
        // You haven't implemented logic for this operator.
        return false, fmt.Errorf("operator %v is not implemented", o)
    }
    for _, validFn := range opArgs.valid {
        if !validFn(left, right) {
            // One of the rules was not satisfied.
            return false, fmt.Errorf("invalid operands for operator %v", o)
        }
    }
    return opArgs.eval(left, right), nil
}
Let's summarize what we have so far:
Abstracted a basic comparison into a value and operator.
Created a way to validate whether a pair of values are valid for an operator.
Created a way to evaluate an operator, given two values.
(Go Playground)
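As a rough usage sketch (assuming the snippets above live in one package with fmt and reflect imported, and using the error returns shown in compare above):
func main() {
    ok, err := compare(Equals, 1, 1)
    fmt.Println(ok, err) // true <nil>

    ok, err = compare(Equals, 1, "hello")
    fmt.Println(ok, err) // false, plus an error because sameTypes rejects int vs string
}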
I hope that I gave some insight into how you can approach this. It's a simple idea, but can take some boilerplate to get working properly.
Good luck!

Why use the "redundant" keyword "struct" for types in Go?

I am a big fan of Go, and I am very pleased with how its syntax is designed. Part of that syntax philosophy is the rule: omit things (keywords, characters, etc.) if they are not actually needed.
For that reason, instead of writing redundant semicolons:
for ; sum < 1000; {
    sum += sum
}
you are allowed to simply write:
for sum < 1000 {
    sum += sum
}
Notice how we omitted the redundant semicolons.
And there are lots of other cases where the syntax is gracefully simplified.
But what about struct when we define a type?
type Person struct {
    name string
}
Why do we need to put the struct keyword here?
Keywords exist to express intention, to clarify the exact choice among the available options, so the compiler knows how to do its job properly.
Would it be unclear or ambiguous if we simply wrote:
type Person {
    name string
}
??
I believe there is a reason for struct in the examples above, because compilation fails when the type is defined without the struct keyword.
Please explain (and provide links) what else we can use instead of struct when we define a type.
Please list the available options from which we need to make clear to the compiler that the things in curly brackets after the type name are exactly the parts of a struct and not something else (and what else could they be?).
Thanks.
It's not redundant. You can make types from existing types:
type MyType int
type MyType string
Or interfaces:
type Stringer interface {
    String() string
}
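To make the range of options more concrete, here is a small illustrative sketch (the type names are invented for this example); the token after the new type's name is what tells the compiler which kind of type is being declared:
type Celsius float64                  // an existing named type
type IntList []int                    // a slice type
type Handlers map[string]func() error // a map type
type Point struct{ X, Y int }         // a struct type literal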
This is covered in the Go tour and in the spec.
Types appear not only in type declarations but in countless other places, for example in function declarations.
Structs may be "used" anonymously, without creating a named type for them. For example, the following declaration is valid:
func GetPoint() struct{ x, y int } {
    return struct{ x, y int }{1, 2}
}
Without the struct keyword, parsing ambiguities would arise in many uses. Let's say we want to create a function which returns an empty struct:
func GetEmpty() struct{} {
    return struct{}{}
}
How would this look without the struct keyword?
func GetEmpty2() {} {
    return {}{}
}
Now if you're the compiler, what would you make out of this? Is this a function with the same signature as GetEmpty()? Or is this a function without a return value and an empty body (func GetEmpty2() {}) followed by a block which contains a return statement? The return statement would be another ambiguity, as it may return nothing which is followed by 2 empty blocks, or it may return an empty struct value which is followed by an empty block...
Now, since we have to use the struct keyword when specifying struct types elsewhere (outside of type declarations) to avoid parsing ambiguity, why make it optional or disallow it in type declarations?
I think a consistent syntax is more important than grabbing all chances to reduce the language (syntax) to the minimum possible. That hurts readability big time. The for loop example you mentioned is not really a simplification, but rather the usage of different forms of the for loop.

Explicit and implicit conversion

I am pretty surprised that this struct, which is only explicitly convertible to bool, works fine inside an if statement:
struct A
{
    explicit operator bool( ) const
    {
        return m_i % 2 == 0;
    }

    int m_i;
};

int main()
{
    A a{ 10 };

    if ( a ) // this is considered explicit
    {
        bool b = a; // this is considered implicit
                    // and therefore does not compile
    }

    return 0;
}
Why is it so? What is the design reason behind it in the C++ Standard?
I personally find the second conversion more explicit than the first one. To make it even clearer, I would have expected the compiler to require the following for both cases:
int main()
{
    A a{ 10 };

    if ( (bool)a )
    {
        bool b = (bool)a;
    }

    return 0;
}
§6.4 Selection statements [stmt.select]
The value of a condition that is an expression is the value of the expression, contextually converted to bool for statements other than switch;
§4 Standard conversions [conv]
Certain language constructs require that an expression be converted to a Boolean value. An expression e appearing in such a context is said to be contextually converted to bool and is well-formed if and only if the declaration bool t(e); is well-formed, for some invented temporary variable t (8.5).
So the expression of the condition in if must be contextually convertible to bool, which means that explicit conversions are allowed.
This is most likely done because the condition of an if can only evaluate to a boolean value, so by writing if(cond) you are explicitly stating that you want cond to be evaluated as a boolean value.

Understanding enum and function signature

In learning Swift, I came across this code:
enum ServerResponse {
    case Result(String, String)
    case Error(String)
}

for i in 1...10 {
    let mySuccess: ServerResponse = {
        let zeroOrOne = rand() % 2
        if zeroOrOne == 0 {
            return ServerResponse.Result("7:00 am", "8.09 pm")
        } else {
            return ServerResponse.Error("Out of cheese.")
        }
    }()

    var serverResponse: String
    switch mySuccess {
    case let .Result(sunrise, sunset):
        serverResponse = "Sunrise is at \(sunrise) and sunset as \(sunset)"
    case let .Error(error):
        serverResponse = "Failure... \(error)"
    }

    println(serverResponse)
}
As can be seen here, there are parentheses () after the closing brace of the declaration for:
let mySuccess: ServerResponse = {
...
}()
Without the parentheses, the playground produces the error:
Function produces expected type 'ServerResponse'; did you mean to call it with ()?
Considering a function has the signature:
func name(param) -> returnType
Can someone please explain why the parentheses are required here? Is it a form of minimised closure, or something else?
It's an anonymous function/lambda/closure (whatever you prefer to call it), taking no arguments, whose return type is inferred by the compiler, and which is then called immediately. It's similar to (function() {…})() in JavaScript.
It has the big advantage of allowing you to define mySuccess as a constant instead of a variable. Additionally, it creates a scope, such that intermediary variables (like zeroOrOne) are not visible outside.
What I'm wondering is just why the author of this code didn't use the same style to define and assign serverResponse…
Your ServerResponse is not a function, it is an enum; but without the parentheses, the block you would be trying to assign to mySuccess IS a function (one that returns a ServerResponse), and therefore cannot be assigned to a variable of type ServerResponse. The result of calling that function (by adding the parentheses) can be.

Resources