Why is a braced-init-list not an expression?

While reading page 93, §5.1.2 of the C++11 standard, I found that it is illegal to use a braced-init-list in this case:
auto x = []{ return {1, 2}; }; // error: a braced-init-list is not an expression
And I have found these two sources, one from the standard and the other from the N3681 proposal.
Page 397, §14.8.2.5: an initializer list argument causes the parameter to be considered a non-deduced context.
and §7.1.6.4: replacing the occurrences of auto with either a new invented type template parameter U or, if the initializer is a braced-init-list (8.5.4), with std::initializer_list<U>.
The N3681 proposal suggested "to change a brace-initialized auto to not deduce to an initializer list, and to ban brace-initialized auto for cases where the braced-initializer has more than one element", and it said "returning a braced-list won't work as it's not an expression":
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3681.html
However, I failed to find an answer to "Why is a braced-init-list not an expression?" It may have the same meaning as this topic:
Why can't we have automatically deduced return types?
but there is a slight difference: there, the asker was trying to understand why the C++ committee concluded this kind of grammar was not worthwhile. So there must be a particular reason for this? Thank you very much.

Quoting from http://www.stroustrup.com/default-argument.pdf:
The reason that an initializer list isn’t an expression is simply that
we decided (correctly, IMO) not to allow initializer lists on the left
hand side of assignments, as operands of ++, etc. and further decided
(again correctly, IMO) to enforce that through the grammar.
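To see what that grammar decision buys, here is a small C++11 sketch (my own illustration, not from the paper):
#include <initializer_list>

int main() {
    // If {1, 2} were an expression, the grammar would have to admit
    // things like these, which it instead rejects outright:
    //   {1, 2} = x;   // braced-init-list on the left of an assignment
    //   ++{1, 2};     // braced-init-list as an operand of ++

    auto a = {1, 2};  // OK: initialization context; the special deduction
                      // rule makes a a std::initializer_list<int>
    // auto f = []{ return {1, 2}; };  // error: return needs an expression
    (void)a;          // silence the unused-variable warning
}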


Is F# Constructed Type syntax special?

I was curious about F#'s "constructed type" syntax. It's documented here.
type-argument generic-type-name
or
generic-type-name<type-argument-list>
With the following examples:
int option
string list
int ref
option<int>
list<string>
ref<int>
Dictionary<int, string>
I was curious if there's anything special about the "backwards" syntax, with the parameter before the type, or if it's just sugar for generic types with one parameter. The following is valid:
type 'a MyOption = // MyOption<'a> also works
    | MySome of 'a
    | MyNone
But I could not get it to work with multiple type parameters. Why do F# developers prefer this syntax for types with one parameter? Is it possible or desirable to make it work with two?
The backwards syntax is a legacy from OCaml. Personally, I never use it. If you really want to, you can make it work with multiple type arguments like this:
type MyMap = (int, string) Map
However, this generates a pointed warning (that might soon become an error):
This construct is for ML compatibility. The syntax '(typ,...,typ) ident' is not used in F# code. Consider using 'ident<typ,...,typ>' instead. You can disable this warning by using '--mlcompatibility' or '--nowarn:62'.
Bottom line, I would recommend always using .NET syntax instead: MyOption<'a> instead of 'a MyOption.
Why do F# developers prefer this syntax for types with one parameter?
Not all of us do. I love F# and am in awe of it, but find the OCaml style distracting.
It gets especially confusing when the two styles are mixed - compare the readability of Async<Result<int,string list>> list with that of List<Async<Result<int,List<string>>>>.
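To make the comparison concrete, both of the following aliases (the alias names are mine, purely illustrative) denote the same type; only the notation differs:
type OCamlStyle  = Async<Result<int, string list>> list
type DotNetStyle = List<Async<Result<int, List<string>>>>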
Here is a thread with some arguments from both sides from fslang suggestions, which I think led to the deprecation of OCaml-style for everything but list, option and a few others.
I find it regrettable that the OCaml style is specified as the preferred option (for these types) in the various style guides, and used throughout the core libraries, while there is such a strong drive to make the language more accessible to newcomers. It definitely adds to the learning curve, as documented in this question and in several others.
Is it possible or desirable to make [OCaml style naming] work with two [type parameters]?
I think a better question is: "Is it possible to only use .NET style?".
Unfortunately the tooling shows types the way they are declared, and the core libraries consistently use OCaml style. I have asked the Rider team about always showing declarations in .NET style in Code Vision, and they referred me to FSharp Compiler Services. I have not (yet) investigated that avenue further.
In our own code we have taken to overriding the OCaml signatures of functions that ship with F# and other libraries as we come across them, for example:
[<AutoOpen>]
module NoCaml =

    module List =

        /// Returns a new collection containing only the elements of the collection for which the given predicate returns "true"
        let filter = List.filter : ('a -> bool) -> List<'a> -> List<'a>

        /// Applies a key-generating function to each element of a list and yields a list of unique keys. Each unique key contains a list of all elements that match to this key.
        let groupBy = List.groupBy : ('a -> 'b) -> List<'a> -> List<'b * List<'a>>

        // etc.
This solves the problem in almost all cases (some exceptions like list construction using [] remain, and need to be overridden at the point of declaration).
I'm not sure what influence this has on performance at runtime - hopefully the extra function calls are optimised away.

If and cond as special forms

Consider these two paragraphs from SICP:
This construct is called a case analysis, and there is a special form
in Lisp for notating such a case analysis. It is called cond (which
stands for “conditional”), and it is used as follows:
...
This uses the special form if, a restricted type of conditional that
can be used when there are precisely two cases in the case analysis.
What does type mean in this context (restricted type of conditional)? Does it mean:
"if" is a type of "cond"? Because the sentence states "there is a special form", so there is only one special form because "if" is one type of "cond".
Both "if" and "cond" are unrelated. They are both conditionals. If this is correct, why does this sentence say "there is a special form" like it is only one?
In "&hairsp;if [is] a restricted type of conditional", I believe "conditional" doesn't mean specifically cond; it means "conditional statement / expression", in general.
So there are two, cond and if. Each can be defined in terms of the other, so a given implementation may choose to have only one of them as a primitive, defining the other in terms of it; or the implementation can choose to have both of them as primitive special forms.
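To make this concrete, here is a small Scheme sketch (mine, not from SICP) showing a cond and its mechanical rewriting into nested ifs - the kind of transformation a macro, or the implementation itself, can perform:
(define (sign x)          ; case analysis with cond
  (cond ((> x 0) 1)
        ((= x 0) 0)
        (else -1)))

(define (sign-if x)       ; the same analysis as nested ifs
  (if (> x 0)
      1
      (if (= x 0)
          0
          -1)))

(display (sign -5))       ; -1
(newline)
(display (sign-if -5))    ; -1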
Special forms are handled specially by the interpreter (compiler) itself.
Macros also can be used for that. They won't be handled by the interpreter itself then, but rather by its macros-handling mechanism.
So if is a conditional; cond is a conditional; cond can have any number of clauses; if must have exactly two (or one or two, depending on the standard) clauses; all the rest is just English. :)

Why is the behavior of decltype defined the way it is?

From C++ Draft Standard N3337:
7.1.6.2 Simple type specifiers
4 The type denoted by decltype(e) is defined as follows:
— if e is an unparenthesized id-expression or an unparenthesized class member access (5.2.5), decltype(e) is the type of the entity named by e. If there is no such entity, or if e names a set of overloaded functions, the program is ill-formed;
— otherwise, if e is an xvalue, decltype(e) is T&&, where T is the type of e;
— otherwise, if e is an lvalue, decltype(e) is T&, where T is the type of e;
If I understand the above correctly,
int a;
decltype(a) b = 10; // type of b is int
int* ap;
decltype(*ap) c = 10; // Does not work since type of c is int&
Can you explain, or point to some documentation that explains, why decltype(*ap) couldn't have been just int?
The standardization of decltype was a herculean effort spanning many years. There were 7 versions of the paper before the committee finally accepted it. The versions were:
N1478 2003-04-28
N1527 2003-09-21
N1607 2004-02-17
N1705 2004-09-12
N1978 2006-04-24
N2115 2006-11-05
N2343 2007-07-18
Remarkably, the seeds of the behavior which you question are in the very first revision: N1478, which introduces the need for "two types of typeof: either to preserve or to drop references in types."
This paper goes on to give a rationale for the reference-preserving variant, including this quote:
On the other hand, the reference-dropping semantics fails to provide a mechanism for exactly expressing the return types of generic functions, as demonstrated by Stroustrup [Str02]. This implies that a reference-dropping typeof would cause problems for writers of generic libraries.
There is no substitute for reading through these papers. However, one might summarize that decltype serves two purposes:
To report the declared type of an identifier.
To report the type of an expression.
For the second use case, recall that expressions never have reference type; instead, every expression is an lvalue, an xvalue, or a prvalue. By convention, when decltype reports the type of an lvalue expression, it makes the type an lvalue reference, and when the expression is an xvalue, the reported type becomes an rvalue reference.
In your example, *ap is an expression, whereas a is an identifier. So your example makes use of both use cases, as first introduced in N1478.
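Here is a small self-checking sketch (mine, not from the papers) of both use cases; note that merely parenthesizing an identifier turns it into an expression:
#include <type_traits>

int main() {
    int a = 0;
    int* ap = &a;

    // Use case 1: the declared type of an identifier.
    static_assert(std::is_same<decltype(a), int>::value, "a is declared int");

    // Use case 2: the type of an expression. *ap is an lvalue of type int,
    // so decltype reports int&.
    static_assert(std::is_same<decltype(*ap), int&>::value, "*ap is an lvalue");

    // Parenthesizing a turns the id-expression into an ordinary expression.
    static_assert(std::is_same<decltype((a)), int&>::value, "(a) is an lvalue");
}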
It is also instructive to note that decltype was not designed in isolation. The rest of the C++ language was evolving during this time period (e.g. rvalue references), and the design of decltype was iterated to keep pace.
Also note that once the decltype proposal was accepted, it continued (and continues to this day) to evolve. See this list of issues:
http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_index.html
specifically section 7.1.6.2 (which is the section where the bulk of the decltype specification lives).

Unwanted evaluation in assignments in Mathematica: why does it happen and how to debug it during package loading?

I am developing a (large) package which no longer loads properly.
This happened after I changed a single line of code.
When I attempt to load the package (with Needs), the package starts loading and then one of the SetDelayed definitions "comes alive" (i.e. is somehow evaluated), gets trapped in an error-trapping routine loaded a few lines before, and the package loading aborts.
The error-trapping routine with Abort is doing its job, except that it should not have been called in the first place, during the package-loading phase.
The error message reveals that the offending argument is in fact a pattern expression which I use on the l.h.s. of a SetDelayed definition a few lines later.
Something like this:
... some code lines
changed line of code
g[x_?NotGoodQ]:=(Message[g::nogood, x];Abort[])
... some other code lines
g/: cccQ[g[x0_]]:=True
When I attempt to load the package, I get:
g::nogood: Argument x0_ is not good
As you can see, the passed argument is a pattern, and it can only come from the code line above.
I tried to find the reason for this behavior, but I have been unsuccessful so far.
So I decided to use the powerful Workbench debugging tools.
I would like to see step by step (or with breakpoints) what happens when I load the package.
I am not yet too familiar with WB, but it seems that, using Debug As..., the package is first loaded and then eventually debugged with breakpoints, etc.
My problem is that the package does not even load completely! And any breakpoint set before loading the package does not seem to be effective.
So... 2 questions:
Can anybody please explain why these code lines "come alive" during package loading? (There are no obvious syntax errors or code fragments left in the package as far as I can see.)
Can anybody please explain how (if) it is possible to examine/debug package code while it is being loaded in WB?
Thank you for any help.
Edit
In light of Leonid's answer and using his EvenQ example:
We can avoid using HoldPattern simply by defining upvalues for g BEFORE downvalues for g:
notGoodQ[x_] := EvenQ[x];
Clear[g];
g /: cccQ[g[x0_]] := True
g[x_?notGoodQ] := (Message[g::nogood, x]; Abort[])
Now
?g
Global`g
cccQ[g[x0_]]^:=True
g[x_?notGoodQ]:=(Message[g::nogood,x];Abort[])
In[6]:= cccQ[g[1]]
Out[6]= True
while
In[7]:= cccQ[g[2]]
During evaluation of In[7]:= g::nogood: -- Message text not found -- (2)
Out[7]= $Aborted
So... general rule:
When writing a function g, first define upvalues for g, then define downvalues for g; otherwise use HoldPattern.
Can you subscribe to this rule?
Leonid says that using HoldPattern might indicate improvable design. Besides the solution indicated above, how could one improve the design of the little code above or, better, in general when dealing with upvalues?
Thank you for your help
Leaving aside the WB (which is not really needed to answer your question) - the problem seems to have a straightforward answer based only on how expressions are evaluated during assignments. Here is an example:
In[1505]:=
notGoodQ[x_]:=True;
Clear[g];
g[x_?notGoodQ]:=(Message[g::nogood,x];Abort[])
In[1509]:= g/:cccQ[g[x0_]]:=True
During evaluation of In[1509]:= g::nogood: -- Message text not found -- (x0_)
Out[1509]= $Aborted
To reproduce the problem, I deliberately made a definition for notGoodQ that always returns True. Now, why was g[x0_] evaluated during the assignment through TagSetDelayed? The answer is that, while TagSetDelayed (as well as SetDelayed) in an assignment h/:f[h[elem1,...,elemn]]:=... does not apply any rules that f may have, it will evaluate h[elem1,...,elemn], as well as f. Here is an example:
In[1513]:=
ClearAll[h,f];
h[___]:=Print["Evaluated"];
In[1515]:= h/:f[h[1,2]]:=3
During evaluation of In[1515]:= Evaluated
During evaluation of In[1515]:= TagSetDelayed::tagnf: Tag h not found in f[Null]. >>
Out[1515]= $Failed
The fact that TagSetDelayed is HoldAll does not mean that it does not evaluate its arguments - it only means that the arguments arrive at it unevaluated, and whether or not they will be evaluated depends on the semantics of TagSetDelayed (which I briefly described above). The same holds for SetDelayed, so the commonly used statement that it "does not evaluate its arguments" is not literally correct. A more correct statement is that it receives the arguments unevaluated and evaluates them in a special way: it does not evaluate the r.h.s., while for the l.h.s. it evaluates the head and the elements but does not apply rules for the head. To avoid that, you may wrap things in HoldPattern, like this:
Clear[g,notGoodQ];
notGoodQ[x_]:=EvenQ[x];
g[x_?notGoodQ]:=(Message[g::nogood,x];Abort[])
g/:cccQ[HoldPattern[g[x0_]]]:=True;
This goes through. Here is some usage:
In[1527]:= cccQ[g[1]]
Out[1527]= True
In[1528]:= cccQ[g[2]]
During evaluation of In[1528]:= g::nogood: -- Message text not found -- (2)
Out[1528]= $Aborted
Note however that the need for HoldPattern inside your left-hand side when making a definition is often a sign that the expression inside your head may also evaluate during the function call, which may break your code. Here is an example of what I mean:
In[1532]:=
ClearAll[f,h];
f[x_]:=x^2;
f/:h[HoldPattern[f[y_]]]:=y^4;
This code attempts to catch cases like h[f[something]], but it will obviously fail since f[something] will evaluate before the evaluation comes to h:
In[1535]:= h[f[5]]
Out[1535]= h[25]
For me, the need for HoldPattern on the l.h.s. is a sign that I need to reconsider my design.
EDIT
Regarding debugging during loading in WB, one thing you can do (IIRC, I cannot check right now) is to use good old print statements, the output of which will appear in the WB's console. Personally, I rarely feel a need for a debugger for this purpose (debugging a package while it loads).
EDIT 2
In response to the edit in the question:
Regarding the order of definitions: yes, you can do this, and it solves this particular problem. But, generally, this isn't robust, and I would not consider it a good general method. It is hard to give definite advice for the case at hand, since it is a bit out of its context, but it seems to me that the use of UpValues here is unjustified. If this is done for error handling, there are other ways to do it without using UpValues.
Generally, UpValues are most commonly used to overload some function in a safe way, without adding any rule to the function being overloaded. One piece of advice is to avoid associating UpValues with heads which also have DownValues and may evaluate - by doing this you start playing a game with the evaluator, and will eventually lose. The safest route is to attach UpValues to inert symbols (heads, containers), which often represent a "type" of objects on which you want to overload a given function.
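For instance, here is a minimal sketch (with a made-up inert head myData) of that safest pattern:
ClearAll[myData, cccQ];
(* myData has no DownValues, so myData[x0_] cannot evaluate during the assignment *)
myData /: cccQ[myData[x0_]] := True;
cccQ[myData[2]]   (* ==> True *)
cccQ[2]           (* ==> cccQ[2], stays unevaluated *)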
Regarding my comment on the presence of HoldPattern indicating a bad design. There certainly are legitimate uses for HoldPattern, such as this (somewhat artificial) one:
In[25]:=
Clear[ff,a,b,c];
ff[HoldPattern[Plus[x__]]]:={x};
ff[a+b+c]
Out[27]= {a,b,c}
Here it is justified because in many cases Plus remains unevaluated, and is useful in its unevaluated form - since one can deduce that it represents a sum. We need HoldPattern here because of the way Plus is defined on a single argument, and because a pattern happens to be a single argument (even though it describes generally multiple arguments) during the definition. So, we use HoldPattern here to prevent treating the pattern as normal argument, but this is mostly different from the intended use cases for Plus. Whenever this is the case (we are sure that the definition will work all right for intended use cases), HoldPattern is fine. Note b.t.w., that this example is also fragile:
In[28]:= ff[Plus[a]]
Out[28]= ff[a]
The reason why it is still mostly OK is that normally we don't use Plus on a single argument.
But there is a second group of cases, where the structure of the usually supplied arguments is the same as the structure of the patterns used for the definition. In this case, pattern evaluation during the assignment indicates that the same evaluation will happen with the actual arguments during function calls. Your usage falls into this category. My comment about a design flaw referred to such cases - you can prevent the pattern from evaluating, but you will have to prevent the arguments from evaluating as well, to make this work. And pattern-matching against an incompletely evaluated expression is fragile. Also, the function should never assume some extra conditions (beyond what it can type-check) for the arguments.

Any reason NOT to always use keyword arguments?

Before jumping into Python, I had started with some Objective-C / Cocoa books. As I recall, most functions required keyword arguments to be explicitly stated. Until recently I forgot all about this, and just used positional arguments in Python. But lately, I've run into a few bugs which resulted from improper positions - sneaky little things they were.
Got me thinking - generally speaking, unless there is a circumstance that specifically requires non-keyword arguments - is there any good reason NOT to use keyword arguments? Is it considered bad style to always use them, even for simple functions?
I feel like as most of my 50-line programs have been scaling to 500 or more lines regularly, if I just get accustomed to always using keyword arguments, the code will be more easily readable and maintainable as it grows. Any reason this might not be so?
UPDATE:
The general impression I am getting is that it's a style preference, with many good arguments that they should generally not be used for very simple arguments, but are otherwise consistent with good style. Before accepting I just want to clarify though - are there any specific non-style problems that arise from this method - for instance, significant performance hits?
There isn't any reason not to use keyword arguments apart from the clarity and readability of the code. The choice of whether to use keywords should be based on whether the keyword adds additional useful information when reading the code or not.
I follow this general rule:
If it is hard to infer the purpose of the argument from the function name – pass it by keyword (e.g. I wouldn't want to have text.splitlines(True) in my code).
If it is hard to infer the order of the arguments, for example if you have too many arguments, or when you have independent optional arguments – pass it by keyword (e.g. funkyplot(x, y, None, None, None, None, None, None, 'red') doesn't look particularly nice).
Never pass the first few arguments by keyword if the purpose of the argument is obvious. You see, sin(2*pi) is better than sin(value=2*pi), the same is true for plot(x, y, z).
In most cases, stable mandatory arguments would be positional, and optional arguments would be keyword.
There's also a possible difference in performance, because in every implementation keyword arguments are slightly slower; but worrying about this would generally be premature optimisation, and the difference isn't significant, so I don't think it's crucial for the decision.
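If you want to check the overhead yourself, a quick (unscientific) measurement with the standard timeit module might look like this:
import timeit

setup = "def f(a, b): pass"
# Compare the cost of positional vs. keyword calls on a trivial function.
print(timeit.timeit("f(1, 2)", setup))      # positional
print(timeit.timeit("f(a=1, b=2)", setup))  # keyword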
UPDATE: Non-stylistical concerns
Keyword arguments can do everything that positional arguments can, and if you're defining a new API there are no technical disadvantages apart from possible performance issues. However, you might run into small issues if you're combining your code with existing elements.
Consider the following:
If you make your function take keyword arguments, that becomes part of your interface.
You can't replace your function with another that has a similar signature but a different keyword for the same argument.
You might want to use a decorator or another utility on your function that assumes that your function takes positional arguments. Unbound methods are an example of such a utility, because they always pass the first argument positionally, so cls.method(self=cls_instance) doesn't work even though there is an argument self in the definition.
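A minimal sketch of the first two points (fetch and its parameters are hypothetical names): once callers pass an argument by keyword, that name is part of your interface:
def fetch(url, timeout=10):
    return (url, timeout)

fetch("http://example.com", timeout=5)   # works today

# A drop-in replacement that renames the parameter breaks those callers:
# def fetch(url, max_wait=10): ...
# fetch("http://example.com", timeout=5)  # TypeError: unexpected keyword argument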
None of these would be a real issue if you design your API well and document the use of keyword arguments, especially if you're not designing something that should be interchangeable with something that already exists.
If your consideration is to improve readability of function calls, why not simply declare functions as normal, e.g.
def test(x, y):
    print "x:", x
    print "y:", y
And simply call functions by declaring the names explicitly, like so:
test(y=4, x=1)
Which obviously gives you the output:
x: 1
y: 4
or this exercise would be pointless.
This avoids having arguments be optional and needing default values (unless you want them to be, in which case just go ahead with the keyword arguments! :) and gives you all the versatility and improved readability of named arguments that are not limited by order.
Well, there are a few reasons why I would not do that.
If all your arguments are keyword arguments, it increases noise in the code and it might remove clarity about which arguments are required and which ones are optional.
Also, if I have to use your code, I might want to kill you!! (Just kidding.) But having to type the name of all the parameters every time... not so fun.
Just to offer a different argument, I think there are some cases in which named parameters might improve readability. For example, imagine a function that creates a user in your system:
create_user("George", "Martin", "g.m#example.com", "payments#example.com", "1", "Radius Circle")
From that definition, it is not at all clear what these values might mean, even though they are all required, however with named parameters it is always obvious:
create_user(
    first_name="George",
    last_name="Martin",
    contact_email="g.m@example.com",
    billing_email="payments@example.com",
    street_number="1",
    street_name="Radius Circle")
I remember reading a very good explanation of "options" in UNIX programs: "Options are meant to be optional, a program should be able to run without any options at all".
The same principle could be applied to keyword arguments in Python.
This kind of argument should allow a user to "customize" the function call, but a function should be able to be called without any explicit keyword-value argument pairs at all.
Sometimes, things should be simple because they are simple.
If you force yourself to use keyword arguments on every function call, your code will soon become unreadable.
When Python's built-in compile() and __import__() functions gained keyword argument support, the same argument was made in favor of clarity. There appears to be no significant performance hit, if any.
Now, if you make your functions only accept keyword arguments (as opposed to passing the positional parameters using keywords when calling them, which is allowed), then yes, it'd be annoying.
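For completeness, Python 3 lets an API author require keyword passing explicitly, via keyword-only parameters; a minimal sketch (the function name is hypothetical):
def create_window(title, *, width=800, height=600):
    # Parameters after the bare * must be passed by keyword.
    return (title, width, height)

create_window("Demo", width=1024)   # OK
# create_window("Demo", 1024)       # TypeError: too many positional arguments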
I don't see the purpose of using keyword arguments when the meaning of the arguments is obvious.
Keyword args are good when you have long parameter lists with no well defined order (that you can't easily come up with a clear scheme to remember); however there are many situations where using them is overkill or makes the program less clear.
First, sometimes it is much easier to remember the order of keywords than the names of keyword arguments, and specifying the names of arguments could make it less clear. Take randint from scipy.random with the following docstring:
randint(low, high=None, size=None)
Return random integers x such that low <= x < high.
If high is None, then 0 <= x < low.
When wanting to generate a random int from [0,10) it's clearer to write randint(10) than randint(low=10) in my view. If you need to generate an array with 100 numbers in [0,10) you can probably remember the argument order and write randint(0, 10, 100). However, you may not remember the variable names (e.g., is the first parameter low, lower, start, min, or minimum?) and once you have to look up the parameter names, you might as well not use them (as you just looked up the proper order).
Also consider variadic functions (ones with a variable number of parameters that are anonymous themselves). E.g., you may want to write something like:
def square_sum(*params):
    sq_sum = 0
    for p in params:
        sq_sum += p*p
    return sq_sum
that can be applied to a bunch of bare parameters (square_sum(1,2,3,4,5) # gives 55). Sure, you could have written the function to take a named iterable, def square_sum(params):, and called it like square_sum([1,2,3,4,5]), but that may be less intuitive, especially when there's no potential confusion about the argument name or its contents.
A mistake I often make is that I forget that positional arguments have to be specified before any keyword arguments when calling a function. If testing is a function, then:
testing(arg = 20, 56)
gives a SyntaxError message; something like:
SyntaxError: non-keyword arg after keyword arg
It is easy to fix of course, it's just annoying. So in the case of short programs like the ones you mention, I would probably just go with positional arguments after giving nice, descriptive names to the parameters of the function. I don't know if what I mention is that big of a problem though.
One downside I could see is that you'd have to think of a sensible default value for everything, and in many cases there might not be any sensible default value (including None). Then you would feel obliged to write a whole lot of error handling code for the cases where a kwarg that logically should be a positional arg was left unspecified.
Imagine writing stuff like this every time:
def logarithm(x=None):
    if x is None:
        raise TypeError("You can't do log(None), sorry!")
