Mathematica's puzzling interpretation of #^2 & /@ Range[n]

I'm puzzled by Mathematica's responses to the following:
ClearAll[n]
#^2 & /@ Range[n]
#^2 & /@ Range[n] // StandardForm
It seems that even Mathematica (8.0) doesn't believe what it has just said:
#^2 & /@ Range[5]
Range[5^2]
Any thoughts about what is happening?
Edit:
The original context for this question was the following. I had written
PrimeOmega[Range[n]] - PrimeNu[Range[n]]
and since n was going to be very large (2^50), I thought I might save time by rewriting it as:
PrimeOmega[#] - PrimeNu[#] & /@ Range[n]
Thinking back, that probably wasn't such a good idea. (I could have used Module to 'compute' the Range only once.)
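For what it's worth, a minimal sketch of the "compute the Range only once" idea (using With rather than Module, since the list is never modified; PrimeOmega and PrimeNu thread over the list just as in the original form):
With[{r = Range[n]}, PrimeOmega[r] - PrimeNu[r]]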

Since n is undefined, Range[n] evaluates to itself. Therefore, Map acts on it as on an expression with any other symbolic head, mapping your function over its elements; here there is just one element, n.
In[11]:= #^2 & /@ someHead[n]
Out[11]= someHead[n^2]
EDIT
Addressing the question in your edit: for numeric n, Range evaluates to a list all right, and you get the expected result (which is Range[5]^2). It is all about the order of evaluation. To get Range[5^2], you could have used #^2 & /@ Unevaluated[Range[5]], in which case everything happens just as for symbolic n above. In fact, Range issues an error message on non-numeric input. Also, it is tangential to the question, but functions like #^2 & effectively thread over lists (Power is Listable), so you don't have to map them; Range[n]^2 works directly.
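For concreteness, here is what the two evaluation orders described above give for n = 5 (my own quick illustration):
#^2 & /@ Range[5]                (* {1, 4, 9, 16, 25}, i.e. Range[5]^2 *)
#^2 & /@ Unevaluated[Range[5]]   (* the 25-element list Range[5^2], since Map sees the single argument 5 *)
Range[5]^2                       (* same as the first line, with no Map at all *)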

Slightly off topic, but you can improve the speed by redefining in terms of FactorInteger, which then is only called once per input.
f1[n_] := PrimeOmega[Range[n]] - PrimeNu[Range[n]]
f2[n_] := With[{fax=FactorInteger[#]}, Total[fax[[All,2]]]-Length[fax]]& /@ Range[n]
Example:
In[27]:= Timing[pdiff1 = f1[2^20];]
Out[27]= {37.730264, Null}
In[28]:= Timing[pdiff2 = f2[2^20];]
Out[28]= {9.364576, Null}
In[29]:= pdiff1===pdiff2
Out[29]= True
Daniel Lichtblau

Related

How can I get Mathematica to use the chain rule?

I want to teach Mathematica that Subscript[w, i] differentiated by Subscript[w, j] is KroneckerDelta[i, j].
I tried
Unprotect[D]; D[Subscript[x_, i_], Subscript[x_, j_]] :=
KroneckerDelta[i, j]; Protect[D]
And this works with D[Subscript[w, i], Subscript[w, j]], but not with more complex expressions, e.g. D[Times[k, Subscript[w, i]], Subscript[w, j]]
I understand from the answer to this question: How to define a function that commutes with D in Mathematica that Mathematica isn't matching my rule, but I don't understand why. Why does Mathematica not use the product rule and then invoke my rule?
I think I see now what is happening.
Mathematica does not recursively define the D operator using the chain rule, presumably because this is too slow. It does some pattern matching on subexpressions to see if they contain the variable of differentiation, and subexpressions which don't are treated as constants; so my pattern for D is never applied.
The way around this turns out to be to tell Mathematica explicitly that
Subscript[w, i]
is not a constant.
My pattern now looks like this
Unprotect[D];
D[Subscript[x_, i_], Subscript[x_, j_],
NonConstants -> {___, Subscript[x_, i_], ___} ] := KroneckerDelta[i, j];
Protect[D]
And I have to apply it this way:
D[k * Subscript[w, i], Subscript[w, j], NonConstants -> Subscript[w, i]]
Mathematica does not have "mathematical maturity" which means it is not a competent graduate student who can look at your request, figure out what you actually meant and give you what you should get. In particular, pattern matching is "structural", which means it just literally matches exactly the structure of what the rules say. That means it doesn't realize that you probably meant k to be a constant and thus understand that the derivative of a constant times a function should be the constant times the derivative.
You can add more and more rules to try to simulate mathematical maturity, but many typical users don't have all the skills needed to exactly correctly write all the rules needed. You can include:
Unprotect[D];
D[Times[k_, Subscript[w_, i_]], Subscript[x_, j_]] := k*KroneckerDelta[i, j];
Protect[D];
D[Times[k, Subscript[w, i]], Subscript[x_, j_]]
but that is assuming that k is free of Subscript[w, i], and you may want to enhance that rule with a condition ensuring that. This still doesn't deal with k+KroneckerDelta[i, j] or k*KroneckerDelta[i, j]+m, etc.
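For illustration, here is one way the suggested condition might be added (a sketch only, not a complete treatment; note that I also use the same symbol for both subscripted arguments so that the KroneckerDelta is meaningful):
Unprotect[D];
D[Times[k_, Subscript[w_, i_]], Subscript[w_, j_]] :=
  k KroneckerDelta[i, j] /; FreeQ[k, Subscript[w, _]];
Protect[D];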
First of all, it's usually a bad idea to Unprotect symbols in Mathematica in order to add DownValues. In addition to the reasons Bill gave, this forces D to check that the arguments you give it don't match your patterns before it can do its normal work. That can slow the system down.
You can often get away with UpValues (via UpSetDelayed) instead of DownValues. That doesn't require that you Unprotect, and it causes your rules to fire only when they apply.
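A minimal sketch of that UpValues route, assuming you only need the simple D[Subscript[w, i], Subscript[w, j]] case covered by the original rule; the definition is attached to Subscript via UpSetDelayed, so D itself stays Protected:
D[Subscript[x_, i_], Subscript[x_, j_]] ^:= KroneckerDelta[i, j]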
In this case, though, I think you just want a custom differentiator on top of D:
myD[f_, Subscript[x_, i_]] :=
With[{vars = DeleteDuplicates[Cases[f, Subscript[x, _], {0, Infinity}]]},
Sum[D[f, v]*KroneckerDelta[i, v[[2]]], {v, vars}]
]
(if I'm not missing something crucial.)
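A quick check of how myD might be used (my own example; here k is assumed independent of the subscripted variables):
myD[k Subscript[w, 1] + Subscript[w, 2]^2, Subscript[w, j]]
(* should give k KroneckerDelta[1, j] + 2 Subscript[w, 2] KroneckerDelta[2, j] *)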

Using All in MapAt in Mathematica

I often have a list of pairs, as
data = {{0,0.0},{1,12.4},{2,14.6},{3,25.1}}
and I want to do something, for instance Rescale, to all of the second elements without touching the first elements. The neatest way I know is:
Transpose[MapAt[Rescale, Transpose[data], 2]]
There must be a way to do this without so much Transposing. My wish is for something like this to work:
MapAt[Rescale, data, {All, 2}]
But my understanding is that MapAt takes Position-style specifications instead of Part-style specifications. What's the proper solution?
To clarify,
I'm seeking a solution where I don't have to repeat myself, i.e. one without the double Transpose or double [[All,2]], because I consider repetition a signal that I'm not doing something the easiest way. However, if eliminating the repetition requires the introduction of intermediate variables or a named function or other additional complexity, maybe the transpose/untranspose solution is already correct.
Use Part:
data = {{0, 0.0}, {1, 12.4}, {2, 14.6}, {3, 25.1}}
data[[All, 2]] = Rescale @ data[[All, 2]];
data
Create a copy first if you need to. (data2 = data then data2[[All, 2]] etc.)
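Spelled out, the copy variant from the previous sentence looks like this:
data2 = data;
data2[[All, 2]] = Rescale @ data2[[All, 2]];
data2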
Amending my answer to keep up with ruebenko's, this can be made into a function also:
partReplace[dat_, func_, spec__] :=
Module[{a = dat},
a[[spec]] = func @ a[[spec]];
a
]
partReplace[data, Rescale, All, 2]
This is quite general in design.
I am coming late to the party, and what I will describe differs very little from what @Mr. Wizard has, so it is best to consider this answer as complementary to his solution. My partial excuses are that, first, the function below packages things a bit differently and closer to the syntax of MapAt itself; second, it is a bit more general and has an option for use with Listable functions; and third, I am reproducing my solution from a past MathGroup thread on exactly this question, which is more than 2 years old, so I am not plagiarizing :)
So, here is the function:
ClearAll[mapAt,MappedListable];
Protect[MappedListable];
Options[mapAt] = {MappedListable -> False};
mapAt[f_, expr_, {pseq : (All | _Integer) ..}, OptionsPattern[]] :=
Module[{copy = expr},
copy[[pseq]] =
If[TrueQ[OptionValue[MappedListable]] && Head[expr] === List,
f[copy[[pseq]]],
f /@ copy[[pseq]]
];
copy];
mapAt[f_, expr_, poslist_List] := MapAt[f, expr, poslist];
This is the same idea as what @Mr. Wizard used, with these differences:
1. When the spec is not of the prescribed form, regular MapAt is used automatically.
2. Not all functions are Listable. The solution of @Mr. Wizard assumes that either a function is Listable or we want to apply it to the entire list. In the above code, you can specify this with the MappedListable option.
I will also borrow a few examples from my answer in the above-mentioned thread:
In[18]:= mat=ConstantArray[1,{5,3}];
In[19]:= mapAt[#/10&,mat,{All,3}]
Out[19]= {{1,1,1/10},{1,1,1/10},{1,1,1/10},{1,1,1/10},{1,1,1/10}}
In[20]:= mapAt[#/10&,mat,{3,All}]
Out[20]= {{1,1,1},{1,1,1},{1/10,1/10,1/10},{1,1,1},{1,1,1}}
Testing on large lists shows that using Listability improves the performance, although not so dramatically here:
In[28]:= largemat=ConstantArray[1,{150000,15}];
In[29]:= mapAt[#/10&,largemat,{All,3}];//Timing
Out[29]= {0.203,Null}
In[30]:= mapAt[#/10&,largemat,{All,3},MappedListable->True];//Timing
Out[30]= {0.094,Null}
This is likely because, for the above function (#/10&), Map (which is used internally in mapAt for the default MappedListable -> False setting) was able to auto-compile. In the example below, the difference is more substantial:
ClearAll[f];
f[x_] := 2 x - 1;
In[54]:= mapAt[f,largemat,{All,3}];//Timing
Out[54]= {0.219,Null}
In[55]:= mapAt[f,largemat,{All,3},MappedListable->True];//Timing
Out[55]= {0.031,Null}
The point is that, while f was not declared Listable, we know that its body is built out of Listable functions, and thus it can be applied to the entire list; but, OTOH, it cannot be auto-compiled by Map. Note that adding the Listable attribute to f would have been completely wrong here and would destroy the purpose, leading to mapAt being slow in both cases.
How about
Transpose[{#[[All, 1]], Rescale[#[[All, 2]]]} &@data]
which returns what you want (i.e., it does not alter data).
If no Transpose is allowed,
Thread[Join[{#[[All, 1]], Rescale[#[[All, 2]]]} &@data]]
works.
EDIT: As "shortest" is now the goal, best from me so far is:
data\[LeftDoubleBracket]All, 2\[RightDoubleBracket] = Rescale[data[[All, 2]]]
at 80 characters, which is identical to Mr.Wizard's... So vote for his answer.
Here is another approach:
op[data_List, fun_] :=
Join[data[[All, {1}]], fun[data[[All, {2}]]], 2]
op[data, Rescale]
Edit 1:
An extension of Mr.Wizard's, which does not copy its data.
SetAttributes[partReplace, HoldFirst]
partReplace[dat_, func_, spec__] := dat[[spec]] = func[dat[[spec]]];
used like this
partReplace[data, Rescale, All, 2]
Edit 2:
Or like this
ReplacePart[data, {All, 2} -> Rescale[data[[All, 2]]]]
This worked for me and a friend
In[128]:= m = {{x, sss, x}, {y, sss, y}}
Out[128]= {{2, sss, 2}, {y, sss, y}}
In[129]:= function[ins1_] := ToUpperCase[ins1];
fatmap[ins2_] := MapAt[function, ins2, 2];
In[131]:= Map[fatmap, m]
Out[131]= {{2, ToUpperCase[sss], 2}, {y, ToUpperCase[sss], y}}

Picking specific symbol definitions in Mathematica (not transformation rules)

I have a following problem.
f[1]=1;
f[2]=2;
f[_]:=0;
dvs = DownValues[f];
this gives
dvs =
{
HoldPattern[f[1]] :> 1,
HoldPattern[f[2]] :> 2,
HoldPattern[f[_]] :> 0
}
My problem is that I would like to extract only definitions for f[1] and f[2] etc but not the general definition f[_], and I do not know how to do this.
I tried,
Cases[dvs, HoldPattern[ f[_Integer] :> _ ]] (*)
but it gives me nothing, i.e. the empty list.
Interestingly, changing HoldPattern into temporary (see the footnote below)
dvs1 = {temporary[1] :> 1, temporary[2] :> 2, temporary[_] :> 0}
and issuing
Cases[dvs1, HoldPattern[temporary[_Integer] :> _]]
gives
{temporary[1] :> 1, temporary[2] :> 2}
and it works. This means that (*) is almost a solution.
I do not understand why it works with temporary and not with HoldPattern. How can I make it work directly with HoldPattern?
Of course, the question is what gets evaluated and what does not, etc. The eternal problem when coding in Mathematica. Something for real gurus...
With best regards
Zoran
Footnote: I typed it by hand, as the replacement "/. HoldPattern -> temporary" actually executes the f[_]:=0 rule and gives something strange; this execution I certainly would like to avoid.
The reason is that you have to escape the HoldPattern, perhaps with Verbatim:
In[11]:= Cases[dvs,
Verbatim[RuleDelayed][
Verbatim[HoldPattern][HoldPattern[f[_Integer]]], _]]
Out[11]= {HoldPattern[f[1]] :> 1, HoldPattern[f[2]] :> 2}
There are just a few heads for which this is necessary, and HoldPattern is one of them, precisely because it is normally "invisible" to the pattern matcher. For your temporary, or other heads, this wouldn't be necessary. Note, by the way, that the pattern f[_Integer] is wrapped in HoldPattern; this time HoldPattern is used for its direct purpose, to protect the pattern from evaluation. Note that RuleDelayed is also wrapped in Verbatim; this is in fact another common case for Verbatim, and it is needed because Cases has a syntax involving a rule, and we do not want Cases to use that interpretation here. So, this is IMO an overall very good example to illustrate both HoldPattern and Verbatim.
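To see the structural mismatch behind the original empty result directly, here is a small check of my own (with f and dvs defined as in the question):
MatchQ[HoldPattern[f[1]] :> 1, HoldPattern[f[_Integer] :> _]]
(* False: the pattern f[_Integer] is compared against the literal wrapper HoldPattern[f[1]] *)
MatchQ[HoldPattern[f[1]] :> 1, Verbatim[HoldPattern][HoldPattern[f[_Integer]]] :> _]
(* True: Verbatim makes the HoldPattern wrapper part of what we match *)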
Note also that it is possible to achieve the goal entirely with HoldPattern, like so:
In[14]:= Cases[dvs, HoldPattern[HoldPattern[HoldPattern][f[_Integer]] :> _]]
Out[14]= {HoldPattern[f[1]] :> 1, HoldPattern[f[2]] :> 2}
However, using HoldPattern for escaping purposes (in place of Verbatim) is IMO conceptually wrong.
EDIT
To clarify the situation with Cases a little, here is a simple example where we use the syntax of Cases involving transformation rules. This extended syntax instructs Cases to not only find and collect matching pieces, but also to transform them according to the rules right after they are found, so the resulting list contains the transformed pieces.
In[29]:= ClearAll[a, b, c, d, e, f];
Cases[{a, b, c, d, e, f}, s_Symbol :> s^2]
Out[30]= {a^2, b^2, c^2, d^2, e^2, f^2}
But what if we need to find elements that are themselves rules? If we just try this:
In[33]:= Cases[{a:>b,c:>d,e:>f},s_Symbol:>_]
Out[33]= {}
It doesn't work since Cases interprets the rule in the second argument as an instruction to use extended syntax, find a symbol and replace it with _. Since it searches on level 1 by default, and symbols are on level 2 here, it finds nothing. Observe:
In[34]:= Cases[{a:>b,c:>d,e:>f},s_Symbol:>_,{2}]
Out[34]= {_,_,_,_,_,_}
In any case, this is not what we wanted. Therefore, we have to force Cases to consider the second argument as a plain pattern (simple, rather than extended, syntax). There are several ways to do that, but all of them "escape" RuleDelayed (or Rule) in some way:
In[37]:= Cases[{a:>b,c:>d,e:>f},(s_Symbol:>_):>s]
Out[37]= {a,c,e}
In[38]:= Cases[{a:>b,c:>d,e:>f},Verbatim[RuleDelayed][s_Symbol,_]:>s]
Out[38]= {a,c,e}
In[39]:= Cases[{a:>b,c:>d,e:>f},(Rule|RuleDelayed)[s_Symbol,_]:>s]
Out[39]= {a,c,e}
In all cases, we either avoid the extended syntax for Cases (last two examples), or manage to use it to our advantage (first case).
Leonid, of course, completely answered the question about why your temporary solution works but HoldPattern does not. However, as an answer to your original problem of extracting the f[1] and f[2] type terms, his code is a bit ugly. To solve just the problem of extracting these terms, I would just concentrate on the structure of the left-hand-side of the definition and use the fact that FreeQ searches at all levels. So, defining
f[1] = 1; f[2] = 2; f[_] := 0;
dvs = DownValues[f];
All of the following
Select[dvs, FreeQ[#, Verbatim[_]] &]
Select[dvs, FreeQ[#, Verbatim[f[_]]] &]
Select[dvs, ! FreeQ[#, HoldPattern[f[_Integer]]] &]
yield the result
{HoldPattern[f[1]] :> 1, HoldPattern[f[2]] :> 2}
Provided there are no f[...] (or, for the first version, Blank[]) terms on the right-hand-side of the downvalues of f, then one of the above will probably be suitable.
Based on Simon's excellent solution here, I suggest:
Cases[DownValues[f], _?(FreeQ[#[[1]], Pattern | Blank] &)]

Complex Error Function in Mathematica

The complex error function w(z) is defined as e^(-z^2) erfc(-iz). The problem with using w(z) as defined above is that the erfc tends to explode out for larger x (complemented by the exponential going to 0, so that everything stays small), so Mathematica reverts to arbitrary-precision calculations that make life VERY slow. The function is used in implementing the Voigt profile, a line shape commonly used in spectroscopy and other related areas. Right now I'm reverting to calculating the lineshape once and using an interpolation to speed things up; however, this doesn't let me alter the parameters of the lineshape (or fit to them) easily.
scipy has a nice and fast implementation of w(z) as scipy.special.wofz, and I was wondering if there is an equivalent in Mathematica.
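For context, the Voigt profile mentioned above can be written directly in terms of w(z). Here is a sketch, with my own parameter names (sigma for the Gaussian standard deviation, gamma for the Lorentzian half-width), into which any faster implementation of w(z) can later be substituted:
w[z_] := Exp[-z^2] Erfc[-I z]  (* w(z) exactly as defined in the question *)
voigt[x_, sigma_, gamma_] := Re[w[(x + I gamma)/(sigma Sqrt[2])]]/(sigma Sqrt[2 Pi])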
The complex error function can be written in terms of the Hermite "polynomial" H_{-1}(x):
In[1]:= FullSimplify[2 HermiteH[-1,I x] == Sqrt[Pi] Exp[-x^2] Erfc[I x]]
Out[1]= True
And the evaluation does not suffer from as many underflows and overflows:
In[68]:= 2 HermiteH[-1, I x] /. x -> 100000.
Out[68]= 6.12323*10^-22 - 0.00001 I
In[69]:= Sqrt[Pi] E^-x^2 Erfc[I x] /. x -> 100000.
During evaluation of In[69]:= General::unfl: Underflow occurred in computation. >>
During evaluation of In[69]:= General::ovfl: Overflow occurred in computation. >>
Out[69]= Indeterminate
That said, some quick tests show the evaluation speed of the Hermite function to be slower than that of the product of the exponential and error function...
A series expansion at infinity shows that the real and imaginary parts are of very different scales. I'd suggest computing them separately and not adding them. Below I use the first few terms of the series expansion to get the imaginary part.
In[186]:=
w[x_?NumericQ] := {N[Exp[-SetPrecision[x, 25]^2], 20],
  N[3/(4 Sqrt[\[Pi]] x^5) + 1/(2 Sqrt[\[Pi]] x^3) + 1/(Sqrt[\[Pi]] x)]}
In[187]:= w[11]
Out[187]= {2.8207700884601354011*10^-53, 0.05150453151309212}
In[188]:= w[1000]
Out[188]= {3.296831478088558579*10^-434295, 0.0005641898656429712}
Not sure how badly you want that very small real part. If you can drop it that will keep the numbers in a reasonable range. In some ranges (or if higher than machine precision is desired) you may want to use more terms from the expansion on that imaginary part.
Daniel Lichtblau
Wolfram Research
The real and imaginary parts of the complex error function on the real line can be explicitly and efficiently computed in Mathematica using the Dawson integral:
In[9]:= Sqrt[Pi] Exp[-x^2] Erfc[I x] ==
E^-x^2 Sqrt[\[Pi]] - 2 I DawsonF[x] // FullSimplify
Out[9]= True
This is about 4 times faster than using HermiteH[-1,z].
In[10]:= w1[x_] := E^-x^2 Sqrt[\[Pi]] - 2 I DawsonF[x]
w2[x_] := 2 HermiteH[-1, I x]
In[15]:= AbsoluteTiming[w1 /@ Range[-5.0, 5.0, 0.001];]
Out[15]= {2.3272327, Null}
In[16]:= AbsoluteTiming[w2 /@ Range[-5.0, 5.0, 0.001];]
Out[16]= {10.2400239, Null}
A program in language C for the complex error function (aka the Faddeeva function) that can be run from Mathematica is also available in RooFit. Read the article by Karbach et al. arXiv:1407.0748 for more information.
Just wrap the C library libcerf.

Mathematica: reconstruct an arbitrary nested list after Flatten

What is the simplest way to map an arbitrarily funky nested list expr to a function unflatten so that expr == unflatten @@ Flatten @ expr?
Motivation:
Compile can only handle full arrays (something I just learned -- but not from the error message), so the idea is to use unflatten together with a compiled version of the flattened expression:
fPrivate = Compile[{x, y}, Evaluate@Flatten@expr];
f[x_?NumericQ, y_?NumericQ] := unflatten @@ fPrivate[x, y]
Example of a solution to a less general problem:
What I actually need to do is to calculate all the derivatives for a given multivariate function up to some order. For this case, I hack my way along like so:
expr=Table[D[x^2 y+y^3,{{x,y},k}],{k,0,2}];
unflatten=Module[{f,x,y,a,b,sslot,tt},
tt=Table[D[f[x,y],{{x,y},k}],{k,0,2}] /.
{Derivative[a_,b_][_][__]-> x[a,b], f[__]-> x[0,0]};
(Evaluate[tt/.MapIndexed[#1->sslot[#2[[1]]]&,
Flatten[tt]]/. sslot-> Slot]&) ]
Out[1]= {x^2 y + y^3, {2 x y, x^2 + 3 y^2}, {{2 y, 2 x}, {2 x, 6 y}}}
Out[2]= {#1, {#2, #3}, {{#4, #5}, {#5, #7}}} &
This works, but it is neither elegant nor general.
Edit: Here is the "job security" version of the solution provided by aaz:
makeUnflatten[expr_List] := Module[{i = 1},
  Function@Evaluate@ReplaceAll[
    If[ListQ[#1], Map[#0, #1], i++] &@expr,
    i_Integer -> Slot[i]]]
It works a charm:
In[2]:= makeUnflatten[expr]
Out[2]= {#1,{#2,#3},{{#4,#5},{#6,#7}}}&
You obviously need to save some information about list structure, because Flatten[{a,{b,c}}]==Flatten[{{a,b},c}].
If ArrayQ[expr], then the list structure is given by Dimensions[expr] and you can reconstruct it with Partition. E.g.
expr = {{a, b, c}, {d, e, f}};
dimensions = Dimensions[expr]
{2,3}
unflatten = Fold[Partition, #1, Reverse[Drop[dimensions, 1]]]&;
expr == unflatten @ Flatten[expr]
(The Partition man page actually has a similar example called unflatten.)
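The same Fold/Partition idea works for deeper arrays as well (my example):
expr3 = ConstantArray[0, {2, 3, 4}];
unflatten3 = Fold[Partition, #1, Reverse[Drop[Dimensions[expr3], 1]]] &;
expr3 == unflatten3 @ Flatten[expr3]   (* True *)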
If expr is not an array, you can try this:
expr = {a, {b, c}};
indexes = Module[{i=0}, If[ListQ[#1], Map[#0, #1], ++i] &@expr]
{1, {2, 3}}
slots = indexes /. {i_Integer -> Slot[i]}
{#1, {#2, #3}}
unflatten = Function[Release[slots]]
{#1, {#2, #3}} &
expr == unflatten @@ Flatten[expr]
I am not sure what you are trying to do with Compile. It is used when you want to evaluate procedural or functional expressions very quickly on numerical values, so I don't think it is going to help here. If repeated calculations of D[f,...] are impeding your performance, you can precompute and store them with something like
Table[d[k]=D[f,{{x,y},k}],{k,0,kk}];
Then just call d[k] to get the kth derivative.
I just wanted to update the excellent solutions by aaz and Janus. It seems that, at least in Mathematica 9.0.1.0 on Mac OS X, the replacement rule (see aaz's solution)
{i_Integer -> Slot[i]}
fails. If, however, we use
{i_Integer :> Slot[i]}
instead, we succeed. The same holds, of course, for the ReplaceAll call in Janus's "job security" version.
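For completeness, here is Janus's makeUnflatten with that single change applied (as I read the fix, only -> is replaced by :>):
makeUnflatten[expr_List] := Module[{i = 1},
  Function@Evaluate@ReplaceAll[
    If[ListQ[#1], Map[#0, #1], i++] &@expr,
    i_Integer :> Slot[i]]]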
For good measure, I include my own function.
unflatten[ex_List, exOriginal_List] :=
Module[
{indexes, slots, unflat},
indexes =
Module[
{i = 0},
If[ListQ[#1], Map[#0, #1], ++i] &@exOriginal
];
slots = indexes /. {i_Integer :> Slot[i]};
unflat = Function[Release[slots]];
unflat @@ ex
];
(* example *)
expr = {a, {b, c}};
expr // Flatten // unflatten[#, expr] &
It might seem a little like a cheat to use the original expression in the function, but as aaz points out, we need some information from the original expression. While you don't need all of it, having a single function that can unflatten anything does require all of it.
My application is similar to Janus's: I am parallelizing calls to Simplify for a tensor. Using ParallelTable I can significantly improve performance, but I wreck the tensor structure in the process. This gives me a quick way to reconstruct my original tensor, simplified.
