Overloading Set[a, b] (a = b)

I would like to overload Mathematica's Set function (=), which turns out to be too tricky for me (see following code example). I successfully overloaded other functions (e.g. Reverse in the code example). Any suggestions?
In[17]:= ClearAll[struct];
In[18]:= var1=struct[{1,2}]
Out[18]= struct[{1,2}]
In[19]:= Reverse@var1
Out[19]= struct[{1,2}]
In[20]:= Head[var1]
Out[20]= struct
In[21]:= struct/:Reverse[stuff_struct]:=struct[Reverse@stuff[[1]]]
In[22]:= Reverse@var1
Out[22]= struct[{2,1}]
In[23]:= struct/:Set[stuff_struct,rhs_]:=Set[struct[[1]],rhs]
In[24]:= var1="Success!"
Out[24]= Success!
In[25]:= var1
Out[25]= Success!
In[26]:= Head[var1]
Out[26]= String
In[27]:= ??struct
Global`struct
Reverse[stuff_struct]^:=struct[Reverse[stuff[[1]]]]
(stuff_struct=rhs_)^:=struct[[1]]=rhs

I don't think that what you want can be done with UpValues (alas), since the tag symbol must occur no deeper than level one in the definition for it to work. Also, the semantics you want is somewhat unusual in Mathematica, since most Mathematica expressions are immutable (not L-values), and their parts cannot be assigned values. I believe that this code will do something similar to what you want:
Unprotect[Set];
Set[var_Symbol, rhs_] /;
MatchQ[Hold[var] /. OwnValues[var], Hold[_struct]] := Set[var[[1]], rhs];
Protect[Set];
For example:
In[33]:= var1 = struct[{1, 2}]
Out[33]= struct[{1, 2}]
In[34]:= var1 = "Success!"
Out[34]= "Success!"
In[35]:= var1
Out[35]= struct["Success!"]
But generally, adding DownValues to such important commands as Set is not recommended since this may corrupt the system in subtle ways.
EDIT
Expanding a bit on why your attempt failed: Mathematica implements flow control and assignment operators using the mechanism of argument holding (Hold* attributes, described here). This mechanism allows it, in particular, to imitate the pass-by-reference semantics needed for assignments. But then, at the moment when you assign to var1, Set does not know what is already stored in var1, since it only has the symbol var1, not its value. The pattern _struct does not match because, even if the variable already stores some struct, Set only has the variable name. For the match to be successful, the variable inside Set would have to evaluate to its value. But then, the value is immutable and you cannot assign to it. The code I suggested tests whether the variable has an assigned value of the form struct[something], and if so, modifies the first part (the Part command is an exception: it can modify parts of an L-value expression provided that those parts already exist).
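For instance (a small illustration; var1 is assumed to currently hold a struct expression, as in the question):
Attributes[Set]
(* {HoldFirst, Protected, SequenceHold} *)
MatchQ[Hold[var1] /. OwnValues[var1], Hold[_struct]]
(* True -- this is exactly the test the condition in the code above performs on the held symbol *)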
You can read more on the topic of Hold* attributes and related issues in many places, for example here and here.

I also do not believe that this can be done with TagSet, because the first argument of Set must be held.
It seems to me that, if one is modifying Set anyway, it can be done with:
Unprotect[Set]
Set[s_, x_] /; Head[s] === struct := s[[1]] = x
However, Leonid knows Mathematica better than I, and he probably has a good reason for the longer definition.
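For reference, a quick session sketch of this shorter definition in action (the same warning as above about adding DownValues to Set applies):
ClearAll[struct, var1];
Unprotect[Set];
Set[s_, x_] /; Head[s] === struct := s[[1]] = x;
Protect[Set];
var1 = struct[{1, 2}];
var1 = "Success!";
var1
(* struct["Success!"] *)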

Related

Multiple statements in mathematica function

I wanted to know how to evaluate multiple statements in a function in Mathematica.
E.g.
f[x_]:=x=x+5 and then return x^2
I know this much can be modified as (x+5)^2 but originally I wanted to read data from the file in the function and print the result after doing some data manipulation.
If you want to group several commands and output the last, use the semicolon (;) between them, like
f[y_]:=(x=y+5;x^2)
Just don't use a ; for the last statement.
If your set of commands grows bigger you might want to use scoping structures like Module or Block.
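For example, a minimal sketch of the Module variant (the same function with the intermediate value kept local):
f[y_] := Module[{x}, x = y + 5; x^2]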
You are looking for CompoundExpression (short form ;):
f[x_]:= (thing = x+5 ; thing^2)
The parentheses are necessary due to the very low precedence of ;.
As Szabolcs called me on, you cannot write:
f[x_]:= (x = x+5 ; x^2)
See this answer for an explanation and alternatives.
Leonid, who you should listen to, says that thing should be localized. I didn't do this above because I wanted to emphasize CompoundExpression as a specific fit for your "and then" construct. As it is written, this will affect the global value of thing which may or may not be what you actually want to do. If it is not, see both the answer linked above, and also:
Mathematica Module versus With or Block - Guideline, rule of thumb for usage?
Several people have mentioned already that you can use CompoundExpression:
f[x_] := (y=x+5; y^2)
However, if you use the same variable x in the expression as in the argument,
f[x_] := (x=x+5; x^2)
then you'll get errors when evaluating the function with a number. This is because := essentially defines a replacement of the pattern variables from the lhs, i.e. f[1] evaluates to the (incorrect) (1 = 1+5; 1^2).
So, as Sjoerd said, use Module (or Block sometimes, but this one has caveats!) to localize a function-variable:
f[x_] := Module[{y}, y=x+5; y^2]
Finally, if you need a function that modified its arguments, then you can set the attribute HoldAll:
Clear[addFive]
SetAttributes[addFive, HoldAll]
addFive[x_] := (x=x+5)
Then use it as
a = 3;
addFive[a]
a
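After these evaluate, a is 8: because addFive holds its argument, the body x = x + 5 becomes the assignment a = a + 5.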

May I write {x,a,b}//Do[...,#]& instead of Do[...,{x,a,b}]?

I'm in love with Ruby. In this language all core functions are actually methods. That's why I prefer postfix notation – the data I want to process sits to the left of the body of the anonymous processing function, for example: array.map{...}. I believe this makes the code easier to read.
But Mathematica, being functional (yeah, it can be procedural if you want), dictates a style where the function name is placed to the left of the data. As we can see in its manuals, // is used only for simple functions without extra arguments, like list // MatrixForm. When a function needs a lot of arguments, the manual authors use the syntax F[data].
That would be okay, but my problem is the case F[f, data], for example Do[function, {x, a, b}]. Most Mathematica functions (if not all) take their arguments in exactly this order – [function, data], not [data, function]. Since I prefer pure functions, to keep the namespace clean instead of creating a lot of named functions in my notebook, the function argument can get big – so big that the data argument ends up 5-20 lines of code below the line with the function call.
This is why sometimes, when my evil Ruby nature takes over, I rewrite such calls in the postfix way shown in the title, because it is important to me that the pure function (potentially a big chunk of code) sits to the right of the data being processed. Yeah, I do it and I'm happy. But there are two things:
this confuses Mathematica's syntax highlighter: the x in the postfix notation is highlighted blue, not turquoise;
every time I look into the Mathematica manuals, I see examples like this one: Do[x[[i]] = (v[[i]] - U[[i, i + 1 ;; n]].x[[i + 1 ;; n]])/ U[[i, i]], {i, n, 1, -1}];, which means... hell, do they really think it's easy to read/support/etc.?!
So these two things made me ask this question here: am I such a bad boy for using my Ruby style, and should I write code the way these guys do, or is it OK, I don't have to worry, and can write as I like?
The style you propose is frequently possible, but is inadvisable in the case of Do. The problem is that Do has the attribute HoldAll. This is important because the loop variable (x in the example) must remain unevaluated and be treated as a local variable. To see this, try evaluating these expressions:
x = 123;
Do[Print[x], {x, 1, 2}]
(* prints 1 and 2 *)
{x, 1, 2} // Do[Print[x], #]&
(* error: Do::itraw: Raw object 123 cannot be used as an iterator.
Do[Print[x], {123, 1, 2}]
*)
The error occurs because the pure function Do[Print[x], #]& lacks the HoldAll attribute, causing {x, 1, 2} to be evaluated. You could solve the problem by explicitly defining a pure function with the HoldAll attribute, thus:
{x, 1, 2} // Function[Null, Do[Print[x], #], HoldAll]
... but I suspect that the cure is worse than the disease :)
Thus, when one is using "binding" expressions like Do, Table, Module and so on, it is safest to conform with the herd.
I think you need to learn to use the styles that Mathematica most naturally supports. Certainly there is more than one way, and my code does not look like everyone else's. Nevertheless, if you continue to try to beat Mathematica syntax into your own preconceived style, based on a different language, I foresee nothing but continued frustration for you.
Whitespace is not evil, and you can easily add line breaks to separate long arguments:
Do[
x[[i]] = (v[[i]] - U[[i, i + 1 ;; n]].x[[i + 1 ;; n]]) / U[[i, i]]
, {i, n, 1, -1}
];
This said, I like to write using more prefix (f @ x) and infix (x ~ f ~ y) notation than I usually see, and I find this valuable because it is easy to determine that such functions are receiving one and two arguments respectively. This is somewhat nonstandard, but I do not think it is kicking over the traces of Mathematica syntax. Rather, I see it as using the syntax to advantage. Sometimes this causes syntax highlighting to fail, but I can live with that:
f[x] ~Do~ {x, 2, 5}
When using anything besides the standard form of f[x, y, z] (with line breaks as needed), you must be more careful of evaluation order, and IMHO, readability can suffer. Consider this contrived example:
{x, y} // # + 1 & @@ # &
I do not find this intuitive. Yes, for someone intimate with Mathematica's order of operations, it is readable, but I believe it does not improve clarity. I tend to reserve // postfix for named functions where reading is natural:
Do[f[x], {x, 10000}] //Timing //First
I'd say it is one of the biggest mistakes to try to program in language B in ways idiomatic for language A, only because you happen to know the latter well and like it. There is nothing wrong with borrowing idioms, but you have to make sure you understand the second language well enough to know why other people use it the way they do.
In the particular case of your example, and generally, I want to draw attention to a few things others did not mention. First, Do is a scoping construct which uses dynamic scoping to localize its iterator symbols. Therefore, you have:
In[4]:=
x=1;
{x,1,5}//Do[f[x],#]&
During evaluation of In[4]:= Do::itraw: Raw object
1 cannot be used as an iterator. >>
Out[5]= Do[f[x],{1,1,5}]
What a surprise, isn't it. This won't happen when you use Do in a standard fashion.
Second, note that, while this fact is largely ignored, f[#]&[arg] is NOT always the same as f[arg]. Example:
ClearAll[f];
SetAttributes[f, HoldAll];
f[x_] := Print[Unevaluated[x]]
f[5^2]
5^2
f[#] &[5^2]
25
This does not affect your example, but your usage is close enough to those cases affected by this, since you manipulate the scopes.
Mathematica supports 4 ways of applying a function to its arguments:
standard function form: f[x]
prefix: f@x or g@@{x,y}
postfix: x // f, and
infix: x~g~y which is equivalent to g[x,y].
What form you choose to use is up to you, and is often an aesthetic choice more than anything else. Internally, f@x is interpreted as f[x]. Personally, I primarily use postfix, like you, because I view each function in the chain as a transformation, and it is easier to string multiple transformations together like that. That said, my code will be littered with both the standard form and the prefix form, mostly depending on whim, but I tend to use the standard form more as it evokes a feeling of containment with regard to the function's parameters.
I took a little liberty with the prefix form, as I included the shorthand form of Apply (@@) alongside Prefix (@). Of the built-in commands, only the standard form, the infix form, and Apply easily let you pass more than one variable to your function without additional work. Apply (e.g. g @@ {x,y}) works by replacing the Head of the expression ({x,y}) with the function, in effect evaluating the function with multiple variables (g@@{x,y} == g[x,y]).
The method I use to pass multiple variables to my functions using the postfix form is via lists. This necessitates a little more work as I have to write
{x,y} // f[ #[[1]], #[[2]] ]&
to specify which element of the List corresponds to the appropriate parameter. I tend to do this, but you could combine this with Apply like
{x,y} // f @@ #&
which involves less typing, but could be more difficult to interpret when you read it later.
Edit: I should point out that f and g above are just placeholders; they can be, and often are, replaced with pure functions, e.g. #+1& @ x is mostly equivalent to #+1&[x], see Leonid's answer.
To clarify, per Leonid's answer, the equivalence between f@expr and f[expr] holds if f does not possess an attribute that would prevent the expression expr from being evaluated before being passed to f. For instance, one of the Attributes of Do is HoldAll, which allows it to act as a scoping construct whose parameters are evaluated internally without undue outside influence. The point is that expr will be evaluated prior to being passed to f, so if you need it to remain unevaluated, extra care must be taken, like creating a pure function with a Hold-style attribute.
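A small sketch of this point, reusing the held function f from Leonid's example above together with the Function[Null, ..., HoldAll] idiom from WReach's answer (expr is just an illustrative delayed definition):
ClearAll[f, expr];
SetAttributes[f, HoldAll];
f[x_] := Print[Unevaluated[x]];
expr := Print["evaluated!"];
f@expr                               (* prints expr: prefix @ passes the argument unevaluated *)
f[#] &[expr]                         (* prints "evaluated!" (then Null): the plain Slot function evaluated expr first *)
Function[Null, f[#], HoldAll][expr]  (* prints expr again: the held pure function preserves it *)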
You can certainly do it, as you evidently know. Personally, I would not worry about how the manuals write code, and just write it the way I find natural and memorable.
However, I have noticed that I usually fall into definite patterns. For instance, if I produce a list after some computation and incidentally plot it to make sure it's what I expected, I usually do
prodListAfterLongComputation[
 args
]//ListPlot[#,PlotRange->Full]&
If I have a list, say lst, and I am now focusing on producing a complicated plot, I'll do
ListPlot[
lst,
Option1->Setting1,
Option2->Setting2
]
So basically, anything that is incidental and perhaps not important to be readable (I don't need to be able to instantaneously parse the first ListPlot as it's not the point of that bit of code) ends up being postfix, to avoid disrupting the already-written complicated code it is applied to. Conversely, complicated code I tend to write in the way I find easiest to parse later, which, in my case, is something like
f[
g[
a,
b,
c
]
]
even though it takes more typing and, if one does not use the Workbench/Eclipse plugin, makes it more work to reorganize code.
So I suppose I'd answer your question with "do whatever is most convenient after taking into account the possible need for readability and the possible loss of convenience such as code highlighting, extra work to refactor code etc".
Of course all this applies if you're the only one working with some piece of code; if there are others, it is a different question altogether.
But this is just an opinion. I doubt it's possible for anybody to offer more than this.
For one-argument functions, f@(arg), (arg)//f and f[arg] are completely equivalent, even in the sense of applying the attributes of f. In the case of multi-argument functions, one may write f@Sequence[args] or Sequence[args]//f with the same effect:
In[1]:= SetAttributes[f,HoldAll];
In[2]:= arg1:=Print[];
In[3]:= f@arg1
Out[3]= f[arg1]
In[4]:= f@Sequence[arg1,arg1]
Out[4]= f[arg1,arg1]
So it seems that the solution for anyone who likes postfix notation is to use Sequence:
x=123;
Sequence[Print[x],{x,1,2}]//Do
(* prints 1 and 2 *)
Some difficulties can potentially appear with functions having attribute SequenceHold or HoldAllComplete:
In[18]:= Select[{#, ToExpression[#, InputForm, Attributes]} & /@
Names["System`*"],
MemberQ[#[[2]], SequenceHold | HoldAllComplete] &][[All, 1]]
Out[18]= {"AbsoluteTiming", "DebugTag", "EvaluationObject", \
"HoldComplete", "InterpretationBox", "MakeBoxes", "ParallelEvaluate", \
"ParallelSubmit", "Parenthesize", "PreemptProtect", "Rule", \
"RuleDelayed", "Set", "SetDelayed", "SystemException", "TagSet", \
"TagSetDelayed", "Timing", "Unevaluated", "UpSet", "UpSetDelayed"}

Mathematica Module versus With or Block - Guideline, rule of thumb for usage?

Leonid wrote in chapter iv of his book : "... Module, Block and With. These constructs are explained in detail in Mathematica Book and Mathematica Help, so I will say just a few words about them here. ..."
From what I have read (been able to find) I am still in the dark. For packaged functions I (simply) use Module, because it works and I know the construct. It may not be the best choice, though. It is not entirely clear to me (from the documentation) when, where or why to use With (or Block).
Question: Is there a rule of thumb / guideline on when to use Module, With or Block (for functions in packages)? Are there limitations compared to Module? The docs say that With is faster. I want to be able to defend my choice of Module (or another construct).
A more practical difference between Block and Module can be seen here:
Module[{x}, x]
Block[{x}, x]
(*
-> x$1979
x
*)
So if you wish to return eg x, you can use Block. For instance,
Plot[D[Sin[x], x], {x, 0, 10}]
does not work; to make it work, one could use
Plot[Block[{x}, D[Sin[x], x]], {x, 0, 10}]
(of course this is not ideal, it is simply an example).
Another use is something like Block[{$RecursionLimit = 1000},...], which temporarily changes $RecursionLimit (Module would not have worked as it renames $RecursionLimit).
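For instance, a minimal check that the changed value is confined to the Block:
Block[{$RecursionLimit = 2000}, $RecursionLimit]
(* 2000 *)
$RecursionLimit
(* back to the default value *)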
One can also use Block to block evaluation of something, eg
Block[{Sin}, Sin[.5]] // Trace
(*
-> {Block[{Sin},Sin[0.5]],Sin[0.5],0.479426}
*)
ie, it returns Sin[0.5] which is only evaluated after the Block has finished executing. This is because Sin inside the Block is just a symbol, rather than the sine function. You could even do something like
Block[{Sin = Cos[#/4] &}, Sin[Pi]]
(*
-> 1/Sqrt[2]
*)
(use Trace to see how it works). So you can use Block to locally redefine built-in functions, too:
Block[{Plus = Times}, 3 + 2]
(*
-> 6
*)
As you mentioned there are many things to consider and a detailed discussion is possible. But here are some rules of thumb that I apply the majority of the time:
Module[{x}, ...] is the safest and may be needed if either
There are existing definitions for x that you want to avoid breaking during the evaluation of the Module, or
There is existing code that relies on x being undefined (for example code like Integrate[..., x]).
Module is also the only choice for creating and returning a new symbol. In particular, Module is sometimes needed in advanced Dynamic programming for this reason.
If you are confident there aren't important existing definitions for x or any code relying on it being undefined, then Block[{x}, ...] is often faster. (Note that, in a project entirely coded by you, being confident of these conditions is a reasonable "encapsulation" standard that you may wish to enforce anyway, and so Block is often a sound choice in these situations.)
With[{x = ...}, expr] is the only scoping construct that injects the value of x inside Hold[...]. This is useful and important. With can be either faster or slower than Block depending on expr and the particular evaluation path that is taken. With is less flexible, however, since you can't change the definition of x inside expr.
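A small illustration of that last point (x here is just a local name):
With[{x = 5}, Hold[x + 1]]     (* Hold[5 + 1] -- the value is injected inside Hold *)
Block[{x = 5}, Hold[x + 1]]    (* Hold[x + 1] -- Block leaves the held x untouched *)
Module[{x = 5}, Hold[x + 1]]   (* Hold[x$123 + 1] (exact name varies) -- Module only renames the local symbol *)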
Andrew has already provided a very comprehensive answer. I would just summarize by noting that Module is for defining local variables that can be redefined within the scope of a function definition, while With is for defining local constants, which can't be. You also can't define a local constant based on the definition of another local constant you have set up in the same With statement, or have multiple symbols on the LHS of a definition. That is, the following does not work.
With[{{a,b}= OptionValue /@ {opt1,opt2} }, ...]
I tend to set up complicated function definitions with Module enclosing a With. I set up all the local constants I can first inside the With, e.g. the Length of the data passed to the function, if I need that, then other local variables as needed. The reason is that With is a little faster if you genuinely do have constants, not variables.
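A hedged sketch of that pattern (processData, data, len and result are illustrative names, not taken from the answers above):
processData[data_List] :=
 Module[{result},
  With[{len = Length[data]},
   result = Total[data]/len;  (* local variable computed from the local constant *)
   {len, result}]]
processData[{1, 2, 3, 4}]
(* {4, 5/2} *)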
I'd like to mention the official documentation on the difference between Block and Module is available at http://reference.wolfram.com/mathematica/tutorial/BlocksComparedWithModules.html.

Question on Condition (/;)

Condition has attribute HoldAll which prevents evaluation of its first argument before applying the Condition. But for some reason Condition evaluates its first argument even if the test gives False:
In[1]:= Condition[Print[x],False]
During evaluation of In[1]:= x
Out[1]= Null/;False
Why is this? For what purpose does Condition evaluate its first argument if the test gives False? In which cases can this behavior be useful?
P.S. Its behavior differs when the Condition is used as the second argument of SetDelayed:
In[5]:= f:=Condition[Print[x],False]; f
Out[6]= f
This is what I expected in all cases.
As far as I can tell (and this has been mentioned by other answerers already), Condition should not be thought of as a standalone function, but as a wrapper used in forming larger expressions involving patterns. But I want to stress that part of the subtlety here comes from the fact that Rule and RuleDelayed are scoping constructs. In general, scoping constructs must have a variable-binding stage, where they resolve possible conflicts in variable names and actually bind variables to their occurrences in the body of the scoping construct (or, in the r.h.s. of the rule for Rule and RuleDelayed). This may be considered a part of the inner workings of the scoping constructs, but, because Mathematica allows top-level manipulations through attributes and things like Evaluate, scoping constructs are not as black-box as they may seem - we may change the bindings by forcing the variable declarations, or the body, or both, to evaluate before the binding happens - for example, by removing some of the Hold* - attributes. I discussed these things here in somewhat more detail, although, not knowing the exact implementation details for the scoping constructs, I had to mostly guess.
Returning back to the case of Rule, RuleDelayed and Condition, it is instructive to Trace one of the examples discussed:
In[28]:= Trace[Cases[{3,3.},a_:>Print[a]/;(Print["!"];IntegerQ[a])],RuleCondition,TraceAbove->All]
During evaluation of In[28]:= !
During evaluation of In[28]:= !
During evaluation of In[28]:= 3
Out[28]= {Cases[{3,3.},a_:>Print[a]/;(Print[!];IntegerQ[a])],
{RuleCondition[$ConditionHold[$ConditionHold[Print[3]]],True],
$ConditionHold[$ConditionHold[Print[3]]]},
{RuleCondition[$ConditionHold[$ConditionHold[Print[3.]]],False],Fail},
{Print[3]},{Null}}
What you see is that there are special internal heads RuleCondition and $ConditionHold, which appear when Condition is used with Rule or RuleDelayed. My guess is that these implement the mechanism to incorporate conditions on pattern variables, including the variable binding. When you use Condition as a standalone function, these don't appear. These heads are crucial for condition mechanism to really work.
You can look at how they work in Rule and RuleDelayed:
In[31]:= RuleCondition[$ConditionHold[$ConditionHold[Print[3.`]]],True]
Out[31]= $ConditionHold[$ConditionHold[Print[3.]]]
In[32]:= RuleCondition[$ConditionHold[$ConditionHold[Print[3.`]]],False]
Out[32]= Fail
You can see that, say, Cases picks up only elements of the form $ConditionHold[$ConditionHold[something]], and ignore those where RuleCondition results in Fail. Now, what happens when you use Condition as a stand-alone function is different - thus the difference in results.
One good example I am aware of, which illustrates the above points very well, is in this thread, where possible implementations of a version of With that binds sequentially are discussed. I will repeat a part of that discussion here, since it is instructive. The idea was to make a version of With where previous declarations can be used for declarations further down the declaration list. If we call it Let, then, for example, for code like
Clear[h, xl, yl];
xl = 1;
yl = 2;
h[x_, y_] := Let[{xl = x, yl = y + xl + 1}, xl^2 + yl^2];
h[a, b]
we should get
a^2+(1+a+b)^2
One of the implementations which was suggested, and gives this result, is:
ClearAll[Let];
SetAttributes[Let, HoldAll];
Let /: (lhs_ := Let[vars_, expr_ /; cond_]) :=
Let[vars, lhs := expr /; cond]
Let[{}, expr_] := expr;
Let[{head_}, expr_] := With[{head}, expr]
Let[{head_, tail__}, expr_] := With[{head}, Let[{tail}, expr]]
(this is due to Bastian Erdnuess). What happens here is that this Let performs bindings at run-time, rather than at the time when the function is being defined. And as soon as we want to use shared local variables, it fails:
Clear[f];
f[x_,y_]:=Let[{xl=x,yl=y+xl+1},xl^2+yl^2/;(xl+yl<15)];
f[x_,y_]:=x+y;
?f
Global`f
f[x_,y_]:=x+y
Had it worked correctly, we would have ended up with 2 distinct definitions. And here we come to the crux of the matter: since this Let acts at run-time, SetDelayed does not perceive the Condition as part of the pattern - it would do that for With, Block, Module, but not for some unknown Let. So both definitions look the same to Mathematica (in terms of patterns), and therefore the second replaces the first. But this is not all. Now we create only the first definition, and try to execute:
Clear[f];
f[x_, y_] := Let[{xl = x, yl = y + xl + 1}, xl^2 + yl^2 /; (xl + yl < 15)];
In[121]:= f[3, 4]
Out[121]= 73 /; 3 + 8 < 15
If you trace the last execution, it would be very unclear why the Condition did not fire here. The reason is that we messed up the binding stage. Here is my improved version, which is free from these flaws:
ClearAll[LetL];
SetAttributes[LetL, HoldAll];
LetL /: Verbatim[SetDelayed][lhs_, rhs : HoldPattern[LetL[{__}, _]]] :=
Block[{With}, Attributes[With] = {HoldAll};
lhs := Evaluate[rhs]];
LetL[{}, expr_] := expr;
LetL[{head_}, expr_] := With[{head}, expr];
LetL[{head_, tail__}, expr_] :=
Block[{With}, Attributes[With] = {HoldAll};
With[{head}, Evaluate[LetL[{tail}, expr]]]];
What it does is expand LetL into nested With at definition-time, not run-time, and that happens before the binding stage. Now, let us see:
In[122]:=
Clear[ff];
ff[x_,y_]:=LetL[{xl=x,yl=y+xl+1},xl^2+yl^2/;(xl+yl<15)];
Trace[ff[3,4]]
Out[124]= {ff[3,4],
{With[{xl$=3},With[{yl$=4+xl$+1},RuleCondition[$ConditionHold[$ConditionHold[xl$^2+yl$^2]],
xl$+yl$<15]]],With[{yl$=4+3+1},RuleCondition[$ConditionHold[$ConditionHold[3^2+yl$^2]],3+yl$<15]],
{4+3+1,8},RuleCondition[$ConditionHold[$ConditionHold[3^2+8^2]],3+8<15],
{{3+8,11},11<15,True},RuleCondition[$ConditionHold[$ConditionHold[3^2+8^2]],True],
$ConditionHold[$ConditionHold[3^2+8^2]]},3^2+8^2,{3^2,9},{8^2,64},9+64,73}
This works fine, and you can see the heads RuleCondition and $ConditionHold showing up all right. It is instructive to look at the resulting definition for ff:
?ff
Global`ff
ff[x_,y_]:=With[{xl=x},With[{yl=y+xl+1},xl^2+yl^2/;xl+yl<15]]
You can see that LetL has expanded at definition-time, as advertised. And since pattern variable binding happened after that, things work fine. Also, if we add another definition:
ff[x_,y_]:=x+y;
?ff
Global`ff
ff[x_,y_]:=With[{xl=x},With[{yl=y+xl+1},xl^2+yl^2/;xl+yl<15]]
ff[x_,y_]:=x+y
We see that the patterns are now perceived as different by Mathematica.
The final question was why Unevaluated does not restore the behavior of RuleDelayed broken by the removal of its HoldRest attribute. I can only guess that this is related to the unusual behavior of RuleDelayed (it eats up any number of Unevaluated wrappers around the r.h.s.), noted in the comments to this question.
To summarize: one of the most frequent intended uses of Condition is closely tied to the enclosing scoping constructs (Rule and RuleDelayed), and one should take into account the variable binding stage in scoping constructs, when analyzing their behavior.
Condition use often depends on what is in the left hand side, so it must evaluate the LHS at least to some degree. Consider:
MatchQ[3, a_ /; IntegerQ[a]]
True
p = {a_, b_};
MatchQ[{3, 0.2}, p /; IntegerQ[a] && b < 1]
True
From both of these, I would have guessed that Condition had the attribute HoldRest rather than HoldAll. It probably needs HoldAll for some internal use, perhaps related to the SetDelayed usage.

Fixing Combinatorica redefinition of Element

My code relies on version of Element which works like MemberQ, but when I load Combinatorica, Element gets redefined to work like Part. What is the easiest way to fix this conflict? Specifically, what is the syntax to remove Combinatorica's definition from DownValues? Here's what I get for DownValues[Element]
{HoldPattern[
Combinatorica`Private`a_List \[Element] \
{Combinatorica`Private`index___}] :>
Combinatorica`Private`a[[Combinatorica`Private`index]],
HoldPattern[Private`x_ \[Element] Private`list_List] :>
MemberQ[Private`list, Private`x]}
If your goal is to prevent Combinatorica from installing the definition in the first place, you can achieve this result by loading the package for the first time thus:
Block[{Element}, Needs["Combinatorica`"]]
However, this will almost certainly make any Combinatorica features that depend upon the definition fail (which may or may not be of concern in your particular application).
You can do several things. Let us introduce a convenience function
ClearAll[redef];
SetAttributes[redef, HoldRest];
redef[f_, code_] := (Unprotect[f]; code; Protect[f])
If you are sure about the order of definitions, you can do something like
redef[Element, DownValues[Element] = Rest[DownValues[Element]]]
If you want to delete definitions based on the context, you can do something like this:
redef[Element, DownValues[Element] =
DeleteCases[DownValues[Element],
rule_ /; Cases[rule, x_Symbol /; (StringSplit[Context[x], "`"][[1]] ===
"Combinatorica"), Infinity, Heads -> True] =!= {}]]
You can also use a softer way - reorder definitions rather than delete:
redef[Element, DownValues[Element] = RotateRight[DownValues[Element]]]
There are many other ways of dealing with this problem. Another one (which I already recommended) is to use UpValues, if this is suitable. The last one I want to mention here is to make a kind of custom dynamic scoping construct based on Block, and wrap it around your code. I personally find it the safest variant, if you want strictly your definition to apply (because it does not care about the order in which various definitions could have been created - it removes all of them and adds just yours). It is also safer in that outside those places where you want your definitions to apply (by "places" I mean parts of the evaluation stack), other definitions will still apply, so this seems to be the least intrusive way. Here is how it may look:
elementDef[] := Element[x_, list_List] := MemberQ[list, x];
ClearAll[elemExec];
SetAttributes[elemExec, HoldAll];
elemExec[code_] := Block[{Element}, elementDef[]; code];
Example of use:
In[10]:= elemExec[Element[1,{1,2,3}]]
Out[10]= True
Edit:
If you need to automate the use of Block, here is an example package to show one way how this can be done:
BeginPackage["Test`"]
var;
f1;
f2;
Begin["`Private`"];
(* Implementations of your functions *)
var = 1;
f1[x_, y_List] := If[Element[x, y], x^2];
f2[x_, y_List] := If[Element[x, y], x^3];
elementDef[] := Element[x_, list_List] := MemberQ[list, x];
(* The following part of the package is defined at the start and you don't
touch it any more, when adding new functions to the package *)
mainContext = StringReplace[Context[], x__ ~~ "Private`" :> x];
SetAttributes[elemExec, HoldAll];
elemExec[code_] := Block[{Element}, elementDef[]; code];
postprocessDefs[context_String] :=
Map[
ToExpression[#, StandardForm,
Function[sym,DownValues[sym] =
DownValues[sym] /.
Verbatim[RuleDelayed][lhs_,rhs_] :> (lhs :> elemExec[rhs])]] &,
Select[Names[context <> "*"], ToExpression[#, StandardForm, DownValues] =!= {} &]];
postprocessDefs[mainContext];
End[]
EndPackage[]
You can load the package and look at the DownValues for f1 and f2, for example:
In[17]:= DownValues[f1]
Out[17]= {HoldPattern[f1[Test`Private`x_,Test`Private`y_List]]:>
Test`Private`elemExec[If[Test`Private`x\[Element]Test`Private`y,Test`Private`x^2]]}
The same scheme will also work for functions not in the same package. In fact, you could separate
the bottom part (code-processing package) to be a package on its own, import it into any other
package where you want to inject Block into your functions' definitions, and then just call something like postprocessDefs[mainContext], as above. You could make the function which makes definitions inside Block (elementDef here) to be an extra parameter to a generalized version of elemExec, which would make this approach more modular and reusable.
If you want to be more selective about the functions where you want to inject Block, this can also be done in various ways. In fact, the whole Block-injection scheme can be made cleaner then, but it will require slightly more care when implementing each function, while the above approach is completely automatic. I can post the code which will illustrate this, if needed.
One more thing: for the less intrusive nature of this method you pay a price - dynamic scope (Block) is usually harder to control than lexically-scoped constructs. So, you must know exactly the parts of evaluation stack where you want that to apply. For example, I would hesitate to inject Block into a definition of a higher order function, which takes some functions as parameters, since those functions may come from code that assumes other definitions (like for example Combinatorica` functions relying on overloaded Element). This is not a big problem, just requires care.
The bottom line of this seems to be: try to avoid overloading built-ins if at all possible. In this case you faced this definition clash yourself, but it would be even worse if the one who faces this problem is a user of your package (maybe yourself a few months later), who wants to combine your package with another one (which happens to overload the same system functions as yours). Of course, it also depends on who will be the users of your package - only yourself or potentially others as well. But in terms of design, and in the long term, you may be better off assuming the latter scenario from the start.
To remove Combinatorica's definition, use Unset or the equivalent form =.. The pattern to unset you can grab from the Information output you show in the question:
Unprotect[Element];
Element[a_List, {index___}] =.
Protect[Element];
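A quick check afterwards (assuming the MemberQ-style rule shown in the question is the one you want to keep):
DownValues[Element]
(* only the HoldPattern[x_ \[Element] list_List] :> MemberQ[list, x] rule should remain *)
Element[2, {1, 2, 3}]
(* True *)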
The worry would be, of course, that Combinatorica depends internally on this ill-conceived redefinition, but you have reason to believe this to not be the case as the Information output from the redefined Element says:
The use of the function
Element in Combinatorica is now
obsolete, though the function call
Element[a, p] still gives the pth
element of nested list a, where p is a
list of indices.
HTH
I propose an entirely different approach than removing Element from DownValues. Simply use the full name of the Element function.
So, if the original is
System`Element[]
the default is now
Combinatorica`Element[]
because of loading the Combinatorica Package.
Just explicitly use
System`Element[]
wherever you need it. Of course check that System is the correct Context using the Context function:
Context[Element]
This approach ensures several things:
The Combinatorica Package will still work in your notebook, even if the Combinatorica Package is updated in the future
You won't have to redefine the Element function, as some have suggested
You can use the Combinatorica`Element function when needed
The only downside is having to explicitly write it every time.
