How can I use rules suggested by solve_direct? (by (rule …) doesn't always work)

Sometimes solve_direct (which I usually invoke via try) lists a number of library theorems and says “The current goal can be solved directly with: …”.
Let <theorem> be one search result of solve_direct; in most cases I can then prove the statement by (rule <theorem>).
Sometimes, however, such a proof is not accepted, resulting in the error message “Failed to apply initial proof method”.
Is there a general, different technique for reusing theorems found by solve_direct?
Or does it depend on the individual situation? I could try to work out a minimal example and attach it to this question.

Personally, I tend to just use:
apply (metis thm)
which works most of the time without forcing me to think very hard (but will still occasionally fail if tricky resolution is required).
Other methods that will also typically work include:
apply (rule thm) (* If "thm" has no premises. *)
apply (erule thm) (* If "thm" has a single premise. *)
apply (erule thm, assumption+) (* If "thm" has multiple premises. *)
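For illustration, here is how each of the three cases might look on tiny goals (a minimal sketch using standard HOL facts; which variant applies depends on the shape of the suggested rule):
lemma "x = x"
by (rule refl) (* refl has no premises *)
lemma "A ∧ B ⟹ A"
by (erule conjunct1) (* conjunct1 has a single premise *)
lemma "⟦ A; B ⟧ ⟹ A ∧ B"
by (erule conjI, assumption+) (* conjI has two premises *)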
Why is there no single answer? The answer is a little complex:
Internally, solve_direct calls find_theorems solves, which then performs the following:
fun etacn thm i = Seq.take (! tac_limit) o etac thm i;
(* ... *)
if Thm.no_prems thm then rtac thm 1 goal
else (etacn thm THEN_ALL_NEW (Goal.norm_hhf_tac THEN' Method.assm_tac ctxt)) 1 goal;
This is the ML code for something similar to rule thm if there are no premises on the rule, or:
apply (erule thm, assumption+)
if there are multiple premises on the rule. As commented by Brian on your question, the above might still fail if there are complex meta-logical connectives in the assumptions (which the norm_hhf_tac deals with, but is not directly exposed as an Isabelle method as far as I am aware).
If you wanted, you could write a new method that exposes the tactic used by find_theorems directly, as follows:
ML {*
fun solve_direct_tac thm ctxt goal =
if Thm.no_prems thm then rtac thm 1 goal
else (etac thm THEN_ALL_NEW (Goal.norm_hhf_tac THEN' Method.assm_tac ctxt)) 1 goal;
*}
method_setup solve =
{* Attrib.thm >> (fn thm => fn ctxt =>
SIMPLE_METHOD' (K (solve_direct_tac thm ctxt))) *}
"directly solve a rule"
This could then be used as follows:
lemma "⟦ a; b ⟧ ⟹ a ∧ b"
by (solve conjI)
which should hopefully solve anything solve_direct throws at you.

I found another way of using solve_direct's suggestions with by rule. When certain very basic rules from the library, such as Hilbert_Choice.someI2, are suggested, it seems that one of the facts in context actually is a rule itself, which may be applicable. The following worked for me in at least two concrete situations (source):
re-examine the “rule-like” fact, the other facts (if any) and the goal
if necessary, reorder the other facts
do the proof using <other_facts> ... by (rule <rule-like-fact>)

You can try fact or rule_tac. If I recall correctly, rule sometimes fails to apply a given rule in the presence of other facts and I am not entirely sure why; that question will have to be answered by someone who is more familiar with the implementation details of these methods than I am.
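For example, when the goal is literally an instance of the suggested theorem (assumptions included), I believe fact suffices, as in this small sketch:
lemma "⟦ a; b ⟧ ⟹ a ∧ b"
by (fact conjI)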

Call by name vs normal order

I know this topic has been discussed several times, but there is something still unclear to me.
I've read the question on the applicative-order/call-by-value and normal-order/call-by-name differences, and there is something I would like to clarify once and for all:
Call-by-name
As normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
In call by name, the expression λx.(λx.x)x is said to be in normal form; is this because "(λx.x)x" is considered to be the body (since the scope of λ extends as far as possible to the right)? And so on the other side, if I apply the normal order, what would be the result?
In call by name, the expression λx.(λx.x)x is said to be in normal form; is this because "(λx.x)x" is considered to be the body (since the scope of λ extends as far as possible to the right)?
Yes, you are right.
And so on the other side, if I apply the normal order, what would be the result?
You do reduction inside the body: (λx.x)x -> x, so the whole thing reduces to the identity function:
λx.(λx.x)x -> λx.x
To clarify it a bit further, let me do this one more time, renaming the variables to conform with the Barendregt variable convention: λx.(λx.x)x =α λx.(λy.y)x:
λx.(λy.y)x -> λx.[y := x](y) = λx.x
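To see where the two strategies genuinely diverge, here is one more example of my own, in the same notation. Both strategies contract the leftmost outermost redex first:
(λx.λy.x) ((λz.z) w) -> λy.(λz.z) w
Call by name stops here, because the remaining redex sits inside an abstraction. Normal order keeps reducing under the lambda:
λy.(λz.z) w -> λy.w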

Prolog: Rules with nothing but anonymous variables in the head, and no body

Prolog's grammar uses a <head> :- <body> format for rules as such:
tree(G) :- acyclic(G) , connected(G).
denoting that G's status as a tree depends on its being acyclic and connected.
This grammar can be extended in an implicit fashion to facts. Following the same example:
connected(graphA). suggests connected(graphA) :- true.
In this sense, one might loosely define Prolog facts as Prolog rules that are always true.
My question: Is a bodiless rule (one that is presumed to be true under all conditions) ever appropriate in any context? Syntactically, such a rule would look as follows.
graph(X). (suggesting graph(X):-true.)
Before answering, to rephrase your question:
In Prolog, would you ever write a rule with nothing but anonymous variables in the head, and no body?
The terminology is kind of important here. Facts are simply rules that have only a head and no body (which is why your question is a bit confusing). Anonymous variables are variables that you explicitly tell the compiler to ignore in the context of a predicate clause (a predicate clause is the syntactical scope of a variable). If you try to give this predicate clause to the Prolog compiler:
foo(Bar).
you will get a "singleton variable" warning. Instead, you can write
foo(_).
and this tells the compiler that this argument is ignored on purpose, and no variable binding should be attempted with it.
Operationally, what happens when Prolog tries to prove a rule?
First, unification of all arguments in the head of the rule, which might lead to new variable bindings;
Then, it tries to prove the body of the rule using all existing variable bindings.
As you can see, the second step makes this a recursively defined algorithm: proving the body of a rule means proving each rule in it.
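As a tiny illustration of these two steps (reusing the tree/1 rule from the question, with made-up facts about a graph g1):
acyclic(g1).
connected(g1).
tree(G) :- acyclic(G), connected(G).

?- tree(g1).
true.
Proving tree(g1) first unifies the head, binding G = g1, and then proves the body goals acyclic(g1) and connected(g1) directly against the facts.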
To come to your question: what is the operational meaning of this:
foo(_).
There is a predicate foo/1, and it is true for any argument, because there are no variable bindings to be done in the head, and always, because no subgoals need to be proven.
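Concretely, with only the clause foo(_). loaded, every query to foo/1 succeeds immediately, whatever the argument:
?- foo(42).
true.

?- foo([a, b, c]).
true.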
I have seen at least one use of such a rule: look at the very bottom of this section of the SWI-Prolog manual. The small code example goes like this:
term_expansion(my_class(_), Clauses) :-
    findall(my_class(C),
            string_code(_, "~!@#$", C),
            Clauses).
my_class(_).
You should read the linked documentation to see the motivation for doing this. The purpose of the code itself is to add a table of facts to the Prolog database at compile time. This is done by term expansion, a mechanism for code transformations, usually used through term_expansion/2. You need the definition of my_class/1 so that term_expansion/2 can pick it up, transform it, and replace it with the expanded code. I strongly suggest you take the snippet above, put it in a file, consult it, and use listing/1 to see the effect. I get:
?- listing(my_class).
my_class(126).
my_class(33).
my_class(64).
my_class(35).
my_class(36).
true.
NB: In this example, you could replace the two occurrences of my_class(_) with anything. You could have just as well written:
term_expansion(foobar, Clauses) :-
    findall(my_class(C),
            string_code(_, "~!@#$", C),
            Clauses).
foobar.
The end result is identical, because the operational meaning is identical. However, using my_class(_) is self-documenting and makes the intention of the code more obvious, at least to an experienced Prolog developer such as the author of SWI-Prolog ;).
A fact is just a bodiless rule, as you call it. And yes, there are plenty of use cases for bodiless facts:
representing static data
base cases for recursion
replacing conditional checks; instead of curly-brace-language pseudo code like
boolean is_three(integer x) {
if (x == 3) { return true; }
else { return false; }
}
we can simply write
is_three(3).
This is often how the base case of a recursive definition is expressed.
To highlight what I was initially looking for, I'll include the following short answer for those who might find themselves asking my initial question in the future.
An example of a bodiless rule is, as @Anniepoo suggested, a base case for a recursive definition. Look at the example of a predicate member(X,L) for illustration:
member(X, [X|_]). /* recursive base case */
member(X, [_|T]) :- member(X, T).
Here, the first clause of member represents the terminating base case: the item of interest X matches the head of the list.
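Querying this definition behaves as you would expect (a quick sanity check; note that member/2 already ships with virtually every Prolog system):
?- member(b, [a, b, c]).
true.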
I suggest visiting @Boris's answer (accepted) for a more complete treatment.

Times and NonCommutativeMultiply, handling the difference automatically

I've got some symbols which are non-commutative, but I don't want to have to remember which expressions have this behaviour whilst constructing equations.
I've had the thought to use MakeExpression to act on the raw boxes, and automatically uplift multiply to non-commutative multiply when appropriate (for instance when some of the symbols are non-commutative objects).
I was wondering whether anyone had any experience with this kind of configuration.
Here's what I've got so far:
(* Detect whether a set of row boxes represents a multiplication *)
Clear[isRowBoxMultiply];
isRowBoxMultiply[x_RowBox] := (Print["rowbox: ", x];
Head[ToExpression[x]] === Times)
isRowBoxMultiply[x___] := (Print["non-rowbox: ", x]; False)
(* Hook into the expression maker, so that we can capture any expression of the form F[x___], to see how it is composed of boxes, and return true or false on that basis *)
MakeExpression[
RowBox[List["F", "[", x___, "]"]], _] := (HoldComplete[
isRowBoxMultiply[x]])
(* Test a number of expressions to see whether they are automatically detected as multiplies or not. *)
F[a]
F[a b]
F[a*b]
F[a - b]
F[3 x]
F[x^2]
F[e f*g ** h*i j]
Clear[MakeExpression]
This appears to correctly identify expressions that are multiplication statements:
During evaluation of In[561]:= non-rowbox: a
Out[565]= False
During evaluation of In[561]:= rowbox: RowBox[{a,b}]
Out[566]= True
During evaluation of In[561]:= rowbox: RowBox[{a,*,b}]
Out[567]= True
During evaluation of In[561]:= rowbox: RowBox[{a,-,b}]
Out[568]= False
During evaluation of In[561]:= rowbox: RowBox[{3,x}]
Out[569]= True
During evaluation of In[561]:= non-rowbox: SuperscriptBox[x,2]
Out[570]= False
During evaluation of In[561]:= rowbox: RowBox[{e,f,*,RowBox[{g,**,h}],*,i,j}]
Out[571]= True
So, it looks like it's not out of the question that I might be able to conditionally rewrite the boxes of the underlying expression; but how to do this reliably?
Take the expression RowBox[{"e","f","*",RowBox[{"g","**","h"}],"*","i","j"}]; this would need to be rewritten as RowBox[{"e","**","f","**",RowBox[{"g","**","h"}],"**","i","**","j"}], which seems like a non-trivial operation to do with the pattern matcher and a rule set.
I'd be grateful for any suggestions from those more experienced than me.
I'm trying to find a way of doing this without altering the default behaviour and ordering of multiply.
Thanks! :)
Joe
This is not the most direct answer to your question, but for many purposes working as low-level as directly with the boxes might be overkill. Here is an alternative: let the Mathematica parser parse your code, and then make a change. Here is a possibility:
ClearAll[withNoncommutativeMultiply];
SetAttributes[withNoncommutativeMultiply, HoldAll];
withNoncommutativeMultiply[code_] :=
  Internal`InheritedBlock[{Times},
    Unprotect[Times];
    Times = NonCommutativeMultiply;
    Protect[Times];
    code];
This replaces Times dynamically with NonCommutativeMultiply, and avoids the intricacies you mentioned. By using Internal`InheritedBlock, I make modifications to Times local to the code executed inside withNoncommutativeMultiply.
You now can automate the application of this function with $Pre:
$Pre = withNoncommutativeMultiply;
Now, for example:
In[36]:=
F[a]
F[a b]
F[a*b]
F[a-b]
F[3 x]
F[x^2]
F[e f*g**h*i j]
Out[36]= F[a]
Out[37]= F[a**b]
Out[38]= F[a**b]
Out[39]= F[a+(-1)**b]
Out[40]= F[3**x]
Out[41]= F[x^2]
Out[42]= F[e**f**g**h**i**j]
Surely, using $Pre in such a manner is hardly appropriate, since in all your code multiplication will be replaced with noncommutative multiplication; I used this as an illustration. You could make a more complicated redefinition of Times, so that this would only work for certain symbols.
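For instance, here is a hypothetical sketch of such a restricted redefinition, in the same InheritedBlock style (nonCommQ and withRestrictedNCM are made-up names; this is an illustration, not a polished solution):
ClearAll[withRestrictedNCM, nonCommQ];
SetAttributes[withRestrictedNCM, HoldAll];
nonCommQ[_] = False; (* flag symbols via e.g. nonCommQ[a] = True *)
withRestrictedNCM[code_] :=
 Internal`InheritedBlock[{Times},
  Unprotect[Times];
  (* rewrite a product to ** only when a flagged symbol occurs among the factors *)
  Times[args___] /; AnyTrue[{args}, nonCommQ] :=
   NonCommutativeMultiply[args];
  Protect[Times];
  code];

nonCommQ[a] = True;
withRestrictedNCM[x a y]
(* a ** x ** y *)
One caveat, visible in the output above: Times is Orderless, so the factors are canonically sorted before this definition fires; preserving the original operand order would require intercepting the input earlier, e.g. at the box level as in the question.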
Here is a safer alternative based on lexical, rather than dynamic, scoping:
ClearAll[withNoncommutativeMultiplyLex];
SetAttributes[withNoncommutativeMultiplyLex, HoldAll];
withNoncommutativeMultiplyLex[code_] :=
With @@ Append[
Hold[{Times = NonCommutativeMultiply}],
Unevaluated[code]]
You can use this in the same way, but only those instances of Times which are explicitly present in the code will be replaced. Again, this is just an illustration of the principle; one can extend or specialize this as needed. Instead of With, which is rather limited in its ability to specialize / add special cases, one can use replacement rules, which have similar semantics.
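For example, with the definition above, the literal Times in the wrapped code is substituted before evaluation:
withNoncommutativeMultiplyLex[F[a b]]
(* F[a ** b] *)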
If I understand correctly, you want to input
a b and a*b
and have MMA understand automatically that Times is really a non-commutative operator (which has its own, separate, commutation rules).
Well, my suggestion is that you use the Notation package.
It is very powerful and (relatively) easy to use (especially for a sophisticated user like you seem to be).
It can be used programmatically and it can reinterpret predefined symbols like Times.
Basically it can intercept Times and change it to MyTimes. You then write code for MyTimes deciding for example which symbols are non commuting and then the output can be pretty formatted again as times or whatever else you wish.
The input and output processing are 2 lines of code. That’s it!
You have to read the documentation carefully and do some experimentation, if what you want is not more or less “standard hacking” of the input-output jobs.
Your case seems to me pretty much standard (again: If I understood well what you want to achieve) and you should find useful to read the “advanced” pages of the Notation package.
To give you an idea of how powerful and flexible the package is, I am using it to write the input-output formatting of a sizable package of Category Theory where noncommutative operations abound. But wait! I am not just defining ONE noncommutative operation, I am defining an unlimited number of noncommutative operations.
Another thing I did was to reinterpret Power when the arguments are categories, without overloading Power. This allows me to treat functorial categories using standard mathematics notation.
Now my “infinite” operations and "super Power" have the same look and feel of standard MMA symbols, including copy-paste functionality.
So, this doesn't directly answer the question, but it does provide the sort of implementation that I was thinking about.
After a bit of investigation, and taking on board some of @LeonidShifrin's suggestions, I've managed to implement most of what I was thinking of. The idea is that it's possible to define patterns that should be considered to be non-commuting quantities, using commutingQ[form] := False. Then any multiplicative expression (actually any expression) can be wrapped with withCommutativeSensitivity[expr] and the expression will be manipulated to separate the quantities into Times[] and NonCommutativeMultiply[] sub-expressions as appropriate:
In[1]:= commutingQ[b] ^:= False;
In[2]:= withCommutativeSensitivity[ a (a + b + 4) b (3 + a) b ]
Out[2]= a (3 + a) (a + b + 4) ** b ** b
Of course it's possible to use $Pre = withCommutativeSensitivity to have this behaviour become the default (come on Wolfram! Make it default already ;) ). It would, however, be nice to have it as a more fundamental behaviour. I'd really like to make a module and Needs[NonCommutativeQuantities] at the beginning of any notebook that needs it, and not have all the facilities that use $Pre break on me (doesn't tracing use it?).
Intuitively I feel that there must be a natural way to hook this functionality into Mathematica at the level of box parsing and wire it up using MakeExpression[]. Am I overextending here? I'd appreciate any thoughts as to whether I'm chasing down a blind alley. (I've had a few experiments in this direction, but always get caught in a recursive definition that I can't work out how to break.)
Any thoughts would be gladly received,
Joe.
Code
Unprotect[NonCommutativeMultiply];
ClearAll[NonCommutativeMultiply]
NonCommutativeMultiply[a_] := a
Protect[NonCommutativeMultiply];
ClearAll[commutingQ]
commutingQ::usage = "commutingQ[expr] returns True if expr doesn't contain any constituent parts that fail the commutingQ test. By default all objects return True to commutingQ.";
commutingQ[x_] := If[Length[x] == 0, True, And @@ (commutingQ /@ List @@ x)]
ClearAll[times2, withCommutativeSensitivity]
SetAttributes[times2, {Flat, OneIdentity, HoldAll}]
SetAttributes[withCommutativeSensitivity, HoldAll];
gatherByCriteria[list_List, crit_] :=
  With[{gathered =
      Gather[{#, crit[#1]} & /@ list, #1[[2]] == #2[[2]] &]},
    (Identity @@ Union[#[[2]]] -> #[[1]] &)[Transpose[#]] & /@ gathered]
times2[x__] := Module[{a, b, y = List[x]},
  Times @@ (gatherByCriteria[y, commutingQ] //.
    {True -> Times, False -> NonCommutativeMultiply,
     HoldPattern[a_ -> b_] :> a @@ b})]
withCommutativeSensitivity[code_] := With @@ Append[
Hold[{Times = times2, NonCommutativeMultiply = times2}],
Unevaluated[code]]
This answer does not address your question but rather the problem that leads you to ask it. Mathematica is pretty useless when dealing with non-commuting objects, but since such objects abound in, e.g., particle physics, there are some useful packages around to deal with the situation.
Look at the grassmanOps package. It has a method to define symbols as either commuting or anti-commuting and overloads the standard NonCommutativeMultiply to handle, i.e. pass through, commuting symbols. It also defines several other operators, such as Derivative, to handle anti-commuting symbols. It is probably easily adapted to cover arbitrary commutation rules, and it should at the very least give you an insight into what things need to be changed if you want to roll your own.

May I write {x,a,b}//Do[...,#]& instead of Do[...,{x,a,b}]?

I'm in love with Ruby. In this language all core functions are actually methods. That's why I prefer postfix notation, where the data I want to process is placed to the left of the body of the anonymous processing function, for example: array.map{...}. I believe this has advantages in how easy the code is to read.
But Mathematica, being functional (yeah, it can be procedural if you want), dictates a style where the function name is placed to the left of the data. As we can see in its manuals, // is used only for some simple function without arguments, like list // MatrixForm. When a function needs a lot of arguments, the people who wrote the manuals use the syntax F[data].
That would be okay, but my problem is the case F[f, data], for example Do[function, {x, a, b}]. Most Mathematica functions (if not all) take arguments in exactly this order: [function, data], not [data, function]. As I prefer to use pure functions to keep the namespace clean instead of creating a lot of named functions in my notebook, the argument function can be big; so big that the argument data ends up 5-20 lines below the line with the function call.
This is why sometimes, when my evil Ruby nature takes control, I rewrite such functions in the postfix way:
{x, a, b} // Do[..., #] &
Because it's important for me that the pure function (potentially big code) is placed to the right of the data being processed. Yeah, I do it and I'm happy. But there are two things:
this confuses Mathematica's highlighting parser: the x in the postfix notation is highlighted blue rather than turquoise;
every time I look into the Mathematica manuals, I see examples like this one: Do[x[[i]] = (v[[i]] - U[[i, i + 1 ;; n]].x[[i + 1 ;; n]])/U[[i, i]], {i, n, 1, -1}];, which means... hell, do they think that's easy to read/support/etc.?!
So these two things made me ask this question here: am I such a bad boy for using my Ruby style, and should I write code like these guys do, or is it OK, and I don't have to worry and can write as I like?
The style you propose is frequently possible, but is inadvisable in the case of Do. The problem is that Do has the attribute HoldAll. This is important because the loop variable (x in the example) must remain unevaluated and be treated as a local variable. To see this, try evaluating these expressions:
x = 123;
Do[Print[x], {x, 1, 2}]
(* prints 1 and 2 *)
{x, 1, 2} // Do[Print[x], #]&
(* error: Do::itraw: Raw object 123 cannot be used as an iterator.
Do[Print[x], {123, 1, 2}]
*)
The error occurs because the pure function Do[Print[x], #]& lacks the HoldAll attribute, causing {x, 1, 2} to be evaluated. You could solve the problem by explicitly defining a pure function with the HoldAll attribute, thus:
{x, 1, 2} // Function[Null, Do[Print[x], #], HoldAll]
... but I suspect that the cure is worse than the disease :)
Thus, when one is using "binding" expressions like Do, Table, Module and so on, it is safest to conform with the herd.
I think you need to learn to use the styles that Mathematica most naturally supports. Certainly there is more than one way, and my code does not look like everyone else's. Nevertheless, if you continue to try to beat Mathematica syntax into your own preconceived style, based on a different language, I foresee nothing but continued frustration for you.
Whitespace is not evil, and you can easily add line breaks to separate long arguments:
Do[
x[[i]] = (v[[i]] - U[[i, i + 1 ;; n]].x[[i + 1 ;; n]]) / U[[i, i]]
, {i, n, 1, -1}
];
This said, I like to write using more prefix (f @ x) and infix (x ~ f ~ y) notation than I usually see, and I find this valuable because it is easy to determine that such functions are receiving one and two arguments respectively. This is somewhat nonstandard, but I do not think it is kicking over the traces of Mathematica syntax. Rather, I see it as using the syntax to advantage. Sometimes this causes syntax highlighting to fail, but I can live with that:
f[x] ~Do~ {x, 2, 5}
When using anything besides the standard form of f[x, y, z] (with line breaks as needed), you must be more careful of evaluation order, and IMHO, readability can suffer. Consider this contrived example:
{x, y} // # + 1 & @@ # &
I do not find this intuitive. Yes, for someone intimate with Mathematica's order of operations, it is readable, but I believe it does not improve clarity. I tend to reserve // postfix for named functions where reading is natural:
Do[f[x], {x, 10000}] //Timing //First
I'd say it is one of the biggest mistakes to try program in a language B in ways idiomatic for a language A, only because you happen to know the latter well and like it. There is nothing wrong in borrowing idioms, but you have to make sure to understand the second language well enough so that you know why other people use it the way they do.
In the particular case of your example, and generally, I want to draw attention to a few things others did not mention. First, Do is a scoping construct which uses dynamic scoping to localize its iterator symbols. Therefore, you have:
In[4]:=
x=1;
{x,1,5}//Do[f[x],#]&
During evaluation of In[4]:= Do::itraw: Raw object
1 cannot be used as an iterator. >>
Out[5]= Do[f[x],{1,1,5}]
What a surprise, isn't it? This won't happen when you use Do in the standard fashion.
Second, note that, while this fact is largely ignored, f[#]&[arg] is NOT always the same as f[arg]. Example:
ClearAll[f];
SetAttributes[f, HoldAll];
f[x_] := Print[Unevaluated[x]]
f[5^2]
5^2
f[#] &[5^2]
25
This does not affect your example, but your usage is close enough to those cases affected by this, since you manipulate the scopes.
Mathematica supports 4 ways of applying a function to its arguments (compared side by side after this list):
standard function form: f[x]
prefix: f@x or g@@{x,y}
postfix: x // f, and
infix: x~g~y which is equivalent to g[x,y].
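All four forms applied to the same placeholder symbols, for comparison:
f[x] (* standard form *)
f @ x (* prefix *)
x // f (* postfix *)
x ~g~ y (* infix: g[x, y] *)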
What form you choose to use is up to you, and is often an aesthetic choice more than anything else. Internally, f@x is interpreted as f[x]. Personally, I primarily use postfix, like you, because I view each function in the chain as a transformation, and it is easier to string multiple transformations together like that. That said, my code will be littered with both the standard form and the prefix form, mostly depending on whim, but I tend to use the standard form more as it evokes a feeling of containment with regard to the function's parameters.
I took a little liberty with the prefix form, as I included the shorthand form of Apply (@@) alongside Prefix (@). Of the built-in commands, only the standard form, the infix form, and Apply let you easily pass more than one variable to your function without additional work. Apply (e.g. g @@ {x,y}) works by replacing the Head of the expression ({x,y}) with the function, in effect evaluating the function with multiple variables (g@@{x,y} == g[x,y]).
The method I use to pass multiple variables to my functions using the postfix form is via lists. This necessitates a little more work as I have to write
{x,y} // f[ #[[1]], #[[2]] ]&
to specify which element of the List corresponds to the appropriate parameter. I tend to do this, but you could combine this with Apply like
{x,y} // f @@ #&
which involves less typing, but could be more difficult to interpret when you read it later.
Edit: I should point out that f and g above are just placeholders; they can be, and often are, replaced with pure functions, e.g. #+1& @ x is mostly equivalent to #+1&[x]; see Leonid's answer.
To clarify, per Leonid's answer, the equivalence between f@expr and f[expr] holds if f does not possess an attribute that would prevent the expression expr from being evaluated before being passed to f. For instance, one of the Attributes of Do is HoldAll, which lets it act as a scoping construct whose parameters are evaluated internally without undue outside influence. The point is that expr will be evaluated prior to being passed to f, so if you need it to remain unevaluated, extra care must be taken, like creating a pure function with a Hold-style attribute.
You can certainly do it, as you evidently know. Personally, I would not worry about how the manuals write code, and just write it the way I find natural and memorable.
However, I have noticed that I usually fall into definite patterns. For instance, if I produce a list after some computation and incidentally plot it to make sure it's what I expected, I usually do
prodListAfterLongComputation[
args
]//ListPlot[#,PlotRange->Full]&
If I have a list, say lst, and I am now focusing on producing a complicated plot, I'll do
ListPlot[
lst,
Option1->Setting1,
Option2->Setting2
]
So basically, anything that is incidental and perhaps not important to be readable (I don't need to be able to instantaneously parse the first ListPlot as it's not the point of that bit of code) ends up being postfix, to avoid disrupting the already-written complicated code it is applied to. Conversely, complicated code I tend to write in the way I find easiest to parse later, which, in my case, is something like
f[
g[
a,
b,
c
]
]
even though it takes more typing and, if one does not use the Workbench/Eclipse plugin, makes it more work to reorganize code.
So I suppose I'd answer your question with "do whatever is most convenient after taking into account the possible need for readability and the possible loss of convenience such as code highlighting, extra work to refactor code etc".
Of course all this applies if you're the only one working with some piece of code; if there are others, it is a different question altogether.
But this is just an opinion. I doubt it's possible for anybody to offer more than this.
For one-argument functions, f@(arg), (arg)//f and f[arg] are completely equivalent, even in the sense of applying the attributes of f. In the case of multi-argument functions one may write f@Sequence[args] or Sequence[args]//f with the same effect:
In[1]:= SetAttributes[f,HoldAll];
In[2]:= arg1:=Print[];
In[3]:= f@arg1
Out[3]= f[arg1]
In[4]:= f@Sequence[arg1,arg1]
Out[4]= f[arg1,arg1]
So it seems that the solution for anyone who likes postfix notation is to use Sequence:
x=123;
Sequence[Print[x],{x,1,2}]//Do
(* prints 1 and 2 *)
Some difficulties can potentially appear with functions having attribute SequenceHold or HoldAllComplete:
In[18]:= Select[{#, ToExpression[#, InputForm, Attributes]} & /@
Names["System`*"],
MemberQ[#[[2]], SequenceHold | HoldAllComplete] &][[All, 1]]
Out[18]= {"AbsoluteTiming", "DebugTag", "EvaluationObject", \
"HoldComplete", "InterpretationBox", "MakeBoxes", "ParallelEvaluate", \
"ParallelSubmit", "Parenthesize", "PreemptProtect", "Rule", \
"RuleDelayed", "Set", "SetDelayed", "SystemException", "TagSet", \
"TagSetDelayed", "Timing", "Unevaluated", "UpSet", "UpSetDelayed"}

Flora-2 diamond inheritance

Flora-2 is an eccentric language, and I know this is a long shot, but I haven't found any active resources devoted to it, so I'm trying here. It's so popular... there is no stackoverflow tag for it. If you know anything about the status and future of Flora-2 and XSB Prolog, I'd love to hear that, too.
Can someone explain Flora-2 diamond-inheritance rules to me? The manual has an example but doesn't show the results of the example. The wording seems to be the opposite of what I see in the interpreter and in the diamond.flr demo. Here's the demo:
c[f*->g].
c1[f(a)*->a]::c.
c2[f(b)*->b]::c.
o:c1.
o:c2.
?- ?X[?Y->?Z].
(What I see happens with or without the base class c)
The manual says:
At the level of methods of arity > 1, a conflict is considered to have taken place if there are two non-overwritten definitions of the same method attached to two different superclasses. When deciding whether a conflict has taken place we disregard the arguments of the method. For instance, in
a:c. c[m(k)*->f]. a:d. d[m(u)*->f].
a multiple inheritance conflict has taken place even though in one case the method m is applied to object k, while in the other it is applied to object u.
(I'm pretty sure they mean arity >= 1 but the results are similar for arity 2 as well)
So I take that to mean that the inheritance of f has a conflict, so it's undefined (although I'm a bit confused about what 'undefined' means; in a related section it says "inheritance does not take place"). Here's what I get when I run the diamond:
?X = o
?Y = f
?Z = g
?X = o
?Y = f(a)
?Z = a
I expected only the first solution, although I'd think that the second solution would at least make some sense if it also had the solution
?X = o
?Y = f(b)
?Z = b
... but it didn't.
FYI, I'm using the latest stable XSB and the latest Flora-2 release... 0.95.
Stumbled upon this 2+ years after the question was asked. You should have asked it on the flora-users mailing list.
Anyway, this seems to have been a bug in that version of Flora-2. I see that the current version gives the correct answer:
?X = o
?Y = f
?Z = g
That is, the two conflicting inheritances canceled each other out, as the manual describes.
I'm not familiar with Flora-2 syntax, but I have a similar example of the well-known diamond-inheritance problem in Logtalk. You can find it here:
https://github.com/LogtalkDotOrg/logtalk3/tree/master/examples/diamonds
See the NOTES.txt and source file comments for information on semantics, default inheritance rules, and user-overriding of default inheritance rules. You can run the example using the latest CVS version of XSB. See the SCRIPT.txt file for sample queries.
