I have the following problem:
I define a function:
f[t_]:=(1-Exp[-t])/(1+Exp[-t])
and integrate it by:
g[t_]:=Integrate[f[t],t]
then when I try to plot it using:
Plot[g[t],{t,0,10}]
I get a list of errors of the kind Integrate::ilim: Invalid integration variable or limit(s) in 1.0000204285714285.
I don't understand where the problem is, but I expect it to be in the way I defined g[t], even though when I call it I get a well-defined expression, namely -t + 2 Log[1 + E^t] (also, when plotting this expression directly I don't get any problems). So, how can I solve this problem?
I tried by redefining the function as:
g[t_]:=Integrate[f[x],{x,0,t}]
but this way it takes a long time to plot (if it finishes at all; I interrupted it after about 10 seconds, so it is too slow in any case).
As correctly stated in the comments, changing := (SetDelayed) to = (Set) in the definition of g fixes the problem. The difference between the two is that with Set the right-hand side of the definition is evaluated at definition time, and the closed form of the integral is assigned as the value of g:
f[t_] := (1 - Exp[-t])/(1 + Exp[-t])
g[t_] = Integrate[f[x], {x, 0, t}];
Definition[g]
(* => g[t_]=ConditionalExpression[-t-Log[4]+2 Log[1+E^t],E^t>=-1] *)
With SetDelayed, the right-hand side (Integrate[f[x], {x, 0, t}]) is re-evaluated at each call of g, which results in very slow evaluation.
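With the Set definition in place, the original plot works as expected (a quick check, assuming the definitions above have been evaluated):
Plot[g[t], {t, 0, 10}]
(* renders immediately, with no Integrate::ilim messages *)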
I can't get the syntax to do what I want, and I am now not sure if it is even possible.
A small review:
One can do this:
{Slider[Dynamic[b], {-2 Pi, 2 Pi}], Dynamic[Sin[b]]}
and now each time the slider moves, 'b' changes and Sin[b] is automatically updated.
But suppose I want to do the computation (the Sin[]) right where the slider is, and only display the final result of Sin[]. Then I can use the second argument of Dynamic like this:
{Slider[Dynamic[b, (b = #; a = Sin[b]; #) &], {-2 Pi, 2 Pi}],
Dynamic[a]}
Now I want to use Manipulate, and do the same thing. I can do the same as the first example above like this:
Manipulate[
Sin[b],
Control[{{b, 0, "b="}, -2 Pi, 2 Pi, ControlType -> Slider}]
]
In the above, Manipulate takes care of the 'Dynamic' stuff and updates Sin[b] whenever 'b' changes.
Now, I want to see if I can do the second case using Manipulate, so I can write:
Manipulate[
a,
Control[{{b, 0, "b="}, -2 Pi, 2 Pi, ControlType -> Slider}] (*where to insert Sin[b]?*)
]
('a' has to be initialized to some value for the initial display.)
But I am not able to figure out how to use the second argument for 'b' in the above. I can't figure out the syntax, and I'm not sure if it is even possible.
Of course, one can't just write
Manipulate[
a,
{Slider[Dynamic[b, (b = #; a = Sin[b]; #) &], {-2 Pi, 2 Pi}]}
]
This is not even valid syntax for Manipulate controls.
The question is: is it possible to use the second argument of Dynamic when setting up Manipulate controls?
The reason I am asking is that it could make it easier to 'localize' the computation right where the control variable changes, and only show the final result elsewhere. It is like a local callback function in a way, where the computation related to changes in each control sits right next to the control itself.
Thanks
Update 9/16/11
Well, after a month of struggle, I finally have the Mathematica CDF up and running.
Thanks to the trick shown here by Simon, and to the others who answered my questions while I was building this CDF (Leonid, Heike, WReach, Belisarius and others).
The few tricks I learned here helped finish this demonstration; here is a link if someone wants to try it.
The design of this CDF is different from everything I've done before. It is based on a finite state machine. There is only ONE tracked symbol in the whole CDF. Using the second argument of Dynamic, the event name is recorded, and the main expression of Manipulate, which runs the finite state machine, has 5 states it can be in and 8 possible events. Depending on the current state and the event that just happened, it switches to a new state (or stays in the same one) and the main display is updated. No trigger is needed. The state machine runs as fast as you allow it, controlled only by the time step size.
This simplifies the logic greatly and makes it possible to handle much more advanced UI and inter-dependent logic, since everything now runs in a well-controlled way and all the logic is in one place.
Without being able to set the event using the second argument of Dynamic, this whole approach would not have been possible.
I need to write a note on this method to make it more clear.
So, just wanted to thank everyone here. I am now finishing another CDF using this method, a triple pendulum simulation.
Not all of the arguments of Manipulate after the first one have to be Control objects. You can put anything you like there - including completely customized dynamic controls. So, how about something like
Manipulate[a, {a, None}, {b, None},
Row[{"b= ",Slider[Dynamic[b, (b = #; a = Sin[b]; #)&], {-2 Pi, 2 Pi}]}]]
where the {a, None} and {b, None} ensure that the variables are local, but they aren't strictly necessary.
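For instance, you can give a an initial value so the first display isn't empty; a minimal sketch along the same lines ({{a, 0}, None} just seeds the Manipulate-local variable):
Manipulate[a, {{a, 0}, None}, {b, None},
 Row[{"b= ",
   Slider[Dynamic[b, (b = #; a = Sin[b]; #) &], {-2 Pi, 2 Pi}]}]]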
I'm in love with Ruby. In that language all core functions are actually methods. That's why I prefer postfix notation – the data I want to process sits to the left of the body of the anonymous processing function, for example array.map{...}. I believe this makes the code easier to read.
But Mathematica, being functional (yeah, it can be procedural if you want), dictates a style where the function name is placed to the left of the data. As we can see in its manuals, // is used only for simple functions that take no extra arguments, like list // MatrixForm. When a function needs a lot of arguments, the people who wrote the manuals use the syntax F[data].
That would be okay, but my problem is the case F[f, data], for example Do[function, {x, a, b}]. Most Mathematica functions (if not all) take their arguments in exactly this order – [function, data], not [data, function]. Since I prefer to use pure functions to keep the namespace clean, instead of creating a lot of named functions in my notebook, the function argument can get big – so big that the data argument ends up 5-20 lines below the line containing the function call.
This is why sometimes, when my evil Ruby nature takes control, I rewrite such functions in a postfix way.
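A representative sketch of what I mean (data and the Print are just small placeholders for my real list and a much longer pure function):
data = Range[5];
{i, 1, Length[data]} // Do[
    Print[data[[i]]^2]
    , #] &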
It's important to me that the pure function (potentially a big chunk of code) sits to the right of the data being processed. Yeah, I do it and I'm happy. But there are two things:
this causes a problem for Mathematica's syntax highlighter: the x in the postfix notation is highlighted in blue, not turquoise;
every time I look into the Mathematica manuals, I see examples like this one: Do[x[[i]] = (v[[i]] - U[[i, i + 1 ;; n]].x[[i + 1 ;; n]])/ U[[i, i]], {i, n, 1, -1}];, which means... hell, they think it's easy to read/support/etc.?!
So these two things made me ask this question here: am I such a bad boy for using my Ruby style, and should I write code the way these guys do, or is it OK and I don't have to worry, and can keep writing as I like?
The style you propose is frequently possible, but is inadvisable in the case of Do. The problem is that Do has the attribute HoldAll. This is important because the loop variable (x in the example) must remain unevaluated and be treated as a local variable. To see this, try evaluating these expressions:
x = 123;
Do[Print[x], {x, 1, 2}]
(* prints 1 and 2 *)
{x, 1, 2} // Do[Print[x], #]&
(* error: Do::itraw: Raw object 123 cannot be used as an iterator.
Do[Print[x], {123, 1, 2}]
*)
The error occurs because the pure function Do[Print[x], #]& lacks the HoldAll attribute, causing {x, 1, 2} to be evaluated. You could solve the problem by explicitly defining a pure function with the HoldAll attribute, thus:
{x, 1, 2} // Function[Null, Do[Print[x], #], HoldAll]
... but I suspect that the cure is worse than the disease :)
Thus, when one is using "binding" expressions like Do, Table, Module and so on, it is safest to conform with the herd.
I think you need to learn to use the styles that Mathematica most naturally supports. Certainly there is more than one way, and my code does not look like everyone else's. Nevertheless, if you continue to try to beat Mathematica syntax into your own preconceived style, based on a different language, I foresee nothing but continued frustration for you.
Whitespace is not evil, and you can easily add line breaks to separate long arguments:
Do[
x[[i]] = (v[[i]] - U[[i, i + 1 ;; n]].x[[i + 1 ;; n]]) / U[[i, i]]
, {i, n, 1, -1}
];
This said, I like to write using more prefix (f @ x) and infix (x ~ f ~ y) notation than I usually see, and I find this valuable because it is easy to determine that such functions are receiving one and two arguments respectively. This is somewhat nonstandard, but I do not think it is kicking over the traces of Mathematica syntax. Rather, I see it as using the syntax to advantage. Sometimes this causes syntax highlighting to fail, but I can live with that:
f[x] ~Do~ {x, 2, 5}
When using anything besides the standard form of f[x, y, z] (with line breaks as needed), you must be more careful of evaluation order, and IMHO, readability can suffer. Consider this contrived example:
{x, y} // # + 1 & @@ # &
I do not find this intuitive. Yes, for someone intimate with Mathematica's order of operations, it is readable, but I believe it does not improve clarity. I tend to reserve // postfix for named functions where reading is natural:
Do[f[x], {x, 10000}] //Timing //First
I'd say it is one of the biggest mistakes to try to program in a language B in ways idiomatic for a language A, only because you happen to know the latter well and like it.
In the particular case of your example, and generally, I want to draw attention to a few things others did not mention. First, Do is a scoping construct which uses dynamic scoping to localize its iterator symbols. Therefore, you have:
In[4]:=
x=1;
{x,1,5}//Do[f[x],#]&
During evaluation of In[4]:= Do::itraw: Raw object
1 cannot be used as an iterator. >>
Out[5]= Do[f[x],{1,1,5}]
What a surprise, isn't it. This won't happen when you use Do in a standard fashion.
Second, note that, while this fact is largely ignored, f[#]&[arg] is NOT always the same as f[arg]. Example:
ClearAll[f];
SetAttributes[f, HoldAll];
f[x_] := Print[Unevaluated[x]]
f[5^2]
5^2
f[#] &[5^2]
25
This does not affect your example, but your usage is close enough to those cases affected by this, since you manipulate the scopes.
Mathematica supports 4 ways of applying a function to its arguments:
standard function form: f[x]
prefix: f@x or g@@{x,y}
postfix: x // f, and
infix: x~g~y which is equivalent to g[x,y].
What form you choose to use is up to you, and is often an aesthetic choice more than anything else. Internally, f@x is interpreted as f[x]. Personally, I primarily use postfix, like you, because I view each function in the chain as a transformation, and it is easier to string multiple transformations together that way. That said, my code will be littered with both the standard form and the prefix form, mostly depending on whim, but I tend to use the standard form more as it evokes a feeling of containment with regard to the function's parameters.
I took a little liberty with the prefix form, as I included the shorthand form of Apply (@@) alongside Prefix (@). Of the built-in commands, only the standard form, the infix form, and Apply let you easily pass more than one variable to your function without additional work. Apply (e.g. g @@ {x,y}) works by replacing the Head of the expression ({x,y}) with the function, in effect evaluating the function with multiple variables (g @@ {x,y} == g[x,y]).
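A quick demonstration, using symbols with no definitions so each expression simply echoes back in its bracketed form:
ClearAll[f, g, x, y]
f @ x          (* f[x] *)
x // f         (* f[x] *)
x ~ g ~ y      (* g[x, y] *)
g @@ {x, y}    (* g[x, y] *)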
The method I use to pass multiple variables to my functions using the postfix form is via lists. This necessitates a little more work as I have to write
{x,y} // f[ #[[1]], #[[2]] ]&
to specify which element of the List corresponds to the appropriate parameter. I tend to do this, but you could combine this with Apply like
{x,y} // f @@ # &
which involves less typing, but could be more difficult to interpret when you read it later.
Edit: I should point out that f and g above are just placeholders; they can, and often are, replaced with pure functions, e.g. #+1& @ x is mostly equivalent to #+1&[x], see Leonid's answer.
To clarify, per Leonid's answer, the equivalence between f@expr and f[expr] holds if f does not possess an attribute that would prevent the expression, expr, from being evaluated before being passed to f. For instance, one of the Attributes of Do is HoldAll, which allows it to act as a scoping construct whose parameters are evaluated internally without undue outside influence. The point is that expr will be evaluated prior to being passed to f, so if you need it to remain unevaluated, extra care must be taken, like creating a pure function with a Hold-style attribute.
You can certainly do it, as you evidently know. Personally, I would not worry about how the manuals write code, and just write it the way I find natural and memorable.
However, I have noticed that I usually fall into definite patterns. For instance, if I produce a list after some computation and incidentally plot it to make sure it's what I expected, I usually do
prodListAfterLongComputation[
  args
] // ListPlot[#, PlotRange -> Full] &
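For instance, with a concrete stand-in for the long computation:
Table[Sin[n]/n, {n, 1, 200}] // ListPlot[#, PlotRange -> Full] &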
If I have a list, say lst, and I am now focusing on producing a complicated plot, I'll do
ListPlot[
lst,
Option1->Setting1,
Option2->Setting2
]
So basically, anything that is incidental and perhaps not important to be readable (I don't need to be able to instantaneously parse the first ListPlot as it's not the point of that bit of code) ends up being postfix, to avoid disrupting the already-written complicated code it is applied to. Conversely, complicated code I tend to write in the way I find easiest to parse later, which, in my case, is something like
f[
g[
a,
b,
c
]
]
even though it takes more typing and, if one does not use the Workbench/Eclipse plugin, makes it more work to reorganize code.
So I suppose I'd answer your question with "do whatever is most convenient after taking into account the possible need for readability and the possible loss of convenience such as code highlighting, extra work to refactor code etc".
Of course all this applies if you're the only one working with some piece of code; if there are others, it is a different question altogether.
But this is just an opinion. I doubt it's possible for anybody to offer more than this.
For one-argument functions, f@arg, arg//f and f[arg] are completely equivalent, even in the sense of applying the attributes of f. In the case of multi-argument functions one may write f@Sequence[args] or Sequence[args]//f with the same effect:
In[1]:= SetAttributes[f,HoldAll];
In[2]:= arg1:=Print[];
In[3]:= f@arg1
Out[3]= f[arg1]
In[4]:= f@Sequence[arg1,arg1]
Out[4]= f[arg1,arg1]
So it seems that the solution for anyone who likes postfix notation is to use Sequence:
x=123;
Sequence[Print[x],{x,1,2}]//Do
(* prints 1 and 2 *)
Some difficulties can potentially appear with functions having attribute SequenceHold or HoldAllComplete:
In[18]:= Select[{#, ToExpression[#, InputForm, Attributes]} & /@
Names["System`*"],
MemberQ[#[[2]], SequenceHold | HoldAllComplete] &][[All, 1]]
Out[18]= {"AbsoluteTiming", "DebugTag", "EvaluationObject", \
"HoldComplete", "InterpretationBox", "MakeBoxes", "ParallelEvaluate", \
"ParallelSubmit", "Parenthesize", "PreemptProtect", "Rule", \
"RuleDelayed", "Set", "SetDelayed", "SystemException", "TagSet", \
"TagSetDelayed", "Timing", "Unevaluated", "UpSet", "UpSetDelayed"}
I have been trying to decipher what this output means, but I just can't seem to figure it out. Does anybody know what is going on here?
I have even tried running the lines one by one, and the errors only show up when the final line (the Show) is executed.
Stepping through the lines individually would not show you what is going on; you have to take apart the statement that is giving you trouble: in this case, Show[p1, p2[1, 1]]. By themselves, neither p1 nor Show should give you trouble, which leads to the conclusion that it must be p2[1, 1]. This is borne out by running that by itself, which generates the same error.
This generates an error because of how Plot, Plot3D, etc. evaluate the function argument. In general, they essentially do a Replace on the text of the function and may not expand function calls. A simple fix is to rewrite p2 as
p2[x0_, y0_] := Plot3D[Evaluate[p[x, y, x0, y0]], {x, 0, 2}, {y, 0, 2}]
which gets rid of the errors. Evaluate ensures that the function is evaluated symbolically before Plot3D gets hold of it, avoiding any mishandling. I wish I had a better idea of when to use Evaluate in these cases, but in general, if you are getting errors from a plotting function like these, it is most likely mishandling the function.
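For concreteness, here is a self-contained sketch of the fixed pattern; p below is just a hypothetical stand-in, since the original definitions aren't shown:
(* hypothetical stand-in definitions, for illustration only *)
p[x_, y_, x0_, y0_] := Sin[x0 x] Cos[y0 y];
p1 = Plot3D[Sin[x y], {x, 0, 2}, {y, 0, 2}];
p2[x0_, y0_] := Plot3D[Evaluate[p[x, y, x0, y0]], {x, 0, 2}, {y, 0, 2}];
Show[p1, p2[1, 1]]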
This is a split from a discussion on an earlier question.
Suppose I need to define a function f which checks whether a given labeling of a graph is a proper coloring. In other words, an integer is assigned to every node, and no two adjacent nodes get the same integer. For instance, for {"Path",3}, f[{1,2,3}] returns True and f[{1,1,2}] returns False. How would I go about creating such a function for an arbitrary graph?
The following does essentially what I need, but generates Part warnings.
g[edges_] := Function @@ {{x}, And @@ (x[[First[#]]] != x[[Last[#]]] & /@ edges)}
f = g[GraphData[{"Path", 3}, "EdgeIndices"]];
f[{1, 2, 1}]==False
This is a toy instance of a problem I regularly come across -- I need to programmatically create a multivariate function f, and I end up with either 1) Part warnings or 2) deferring the evaluation of g until the evaluation of f.
Here's something. When nothing else is working, Hold and rules can usually get the job done. I'm not sure it produces the correct results w.r.t. your graph-coloring question, but hopefully it gives you a starting place. I ended up using Slot instead of a named variable because there were some scoping issues (also present in my previous suggestion, x$ vs. x) when I used a named variable, which I didn't spend the time trying to work around.
g[edges_] :=
With[{ors = (Hold @@ edges) /. {a_, b_} :> #[[a]] == #[[b]]},
Function[!ors] /. Hold -> Or
]
In[90]:= f = g[GraphData[{"Path", 3}, "EdgeIndices"]]
Out[90]= !(#1[[1]] == #1[[2]] || #1[[2]] == #1[[3]]) &
In[91]:= f[{1, 2, 3}]
Out[91]= True
In[92]:= f[{1, 1, 2}]
Out[92]= False
I feel like it lacks typical Mathematica elegance, but it works. I'll update if I'm inspired with something more beautiful.
There are a couple of other solutions to this kind of problem which don't require Hold or ReleaseHold, but instead rely on the fact that Function already has the HoldAll attribute. You first locally "erase" the definition of Part using Block so the expression you're interested in can be safely constructed, then use With to interpolate that expression into a Function, which can then be safely returned outside of the Block; this also uses the fact that Slot doesn't really mean anything outside of Function.
Using your example:
coloringChecker[edges_List] :=
Block[{Part},
With[{body = And @@ Table[#[[First@edge]] != #[[Last@edge]], {edge, edges}]},
body &]]
I don't know if this is all that much less cumbersome than using Hold, but it is different.
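A quick check, reusing the test cases from the question (assuming the definition above has been evaluated):
f = coloringChecker[GraphData[{"Path", 3}, "EdgeIndices"]];
{f[{1, 2, 3}], f[{1, 1, 2}]}
(* {True, False} *)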
I'm still puzzled by the difficulties that you seem to be having. Here's a function which checks that no 2 consecutive elements in a list are identical:
f[l_List] := Length[Split[l]] == Length[l]
No trouble with Part, and no error messages for the simple examples I've tried so far, including the OP's 'test' cases. I also contend that this is more concise and more readable than either of the other approaches seen so far.
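Checking against the OP's test cases (with the definition above):
f[{1, 2, 3}]
(* True *)
f[{1, 1, 2}]
(* False *)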