Instability in DeleteDuplicates and Tally - wolfram-mathematica

While preparing an answer to Count how many different values a list takes in Mathematica I came across an instability (for lack of a better term) in both DeleteDuplicates and Tally that I do not understand.
Consider first:
a = {2.2000000000000005, 2.2, 2.1999999999999999};
a // InputForm
DeleteDuplicates@a // InputForm
Union@a // InputForm
Tally@a // InputForm
{2.2000000000000006`, 2.2, 2.1999999999999997`}
{2.2000000000000006`, 2.2, 2.1999999999999997`}
{2.1999999999999997`, 2.2, 2.2000000000000006`}
{{2.2000000000000006`, 3}}
This behavior is as I expected in each case. Tally compensates for the slight numerical differences and sees each element as being equivalent. Union and DeleteDuplicates see all elements as unique. (This behavior of Tally is not documented to my knowledge, but I have made use of it before.)
Now, consider this complication:
a = {11/5, 2.2000000000000005, 2.2, 2.1999999999999997};
a // InputForm
DeleteDuplicates@a // InputForm
Union@a // InputForm
Tally@a // InputForm
{11/5, 2.2000000000000006, 2.2, 2.1999999999999997}
{11/5, 2.2000000000000006, 2.2}
{2.1999999999999997, 2.2, 11/5, 2.2000000000000006}
{{11/5, 1}, {2.2000000000000006, 1}, {2.2, 2}}
The output of Union is as anticipated, but the results from both DeleteDuplicates and Tally are surprising.
Why does DeleteDuplicates suddenly see 2.1999999999999997 as a duplicate to be eliminated?
Why does Tally suddenly see 2.2000000000000006 and 2.2 as distinct, when it did not before?
As a related point, it can be seen that packed arrays affect Tally:
a = {2.2000000000000005, 2.2, 2.1999999999999999};
a // InputForm
Tally@a // InputForm
{2.2000000000000006, 2.2, 2.1999999999999997}
{{2.2000000000000006`, 3}}
a = Developer`ToPackedArray@a;
a // InputForm
Tally@a // InputForm
{2.2000000000000006, 2.2, 2.1999999999999997}
{{2.2000000000000006`, 1}, {2.2, 2}}

The exhibited behaviour appears to be the result of the usual woes associated with floating-point arithmetic, coupled with some questionable behaviour in some of the functions under discussion.
SameQ Is Not An Equivalence Relation
First on the slate: consider that SameQ is not an equivalence relation because it is not transitive:
In[1]:= $a = {11/5, 2.2000000000000005, 2.2, 2.1999999999999997};
In[2]:= SameQ[$a[[2]], $a[[3]]]
Out[2]= True
In[3]:= SameQ[$a[[3]], $a[[4]]]
Out[3]= True
In[4]:= SameQ[$a[[2]], $a[[4]]]
Out[4]= False (* !!! *)
So right out the gate, we are faced with erratic behaviour even before turning to the other functions.
This behaviour is due to the documented rule for SameQ that says that two real numbers are treated as "equal" if they "differ in their last binary digit":
In[5]:= {# // InputForm, Short@RealDigits[#, 2][[1, -10;;]]} & /@ $a[[2;;4]] // TableForm
(* showing only the last ten binary digits for each *)
Out[5]//TableForm= 2.2000000000000006 {0,1,1,0,0,1,1,0,1,1}
2.2 {0,1,1,0,0,1,1,0,1,0}
2.1999999999999997 {0,1,1,0,0,1,1,0,0,1}
Note that, strictly speaking, $a[[3]] and $a[[4]] differ in the last two binary digits, but the magnitude of the difference is one bit of the lowest order.
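To see the same thing numerically rather than from the digit lists, one can measure the gaps in units of the machine spacing, which is 2^-51 for machine reals between 2 and 4 (a small sketch of mine, not part of the original analysis):
(* gaps between the three reals in units of the spacing 2^-51; on IEEE doubles this
   should give {1., 0., -1.}, i.e. one unit up, equal, one unit down *)
($a[[2 ;; 4]] - 2.2)/2.^-51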
DeleteDuplicates Does Not Really Use SameQ
Next, consider that the documentation states that DeleteDuplicates[...] is equivalent to DeleteDuplicates[..., SameQ]. Well, that is strictly true -- but probably not in the sense that you might expect:
In[6]:= DeleteDuplicates[$a] // InputForm
Out[6]//InputForm= {11/5, 2.2000000000000006, 2.2}
In[7]:= DeleteDuplicates[$a, SameQ] // InputForm
Out[7]//InputForm= {11/5, 2.2000000000000006, 2.2}
The same, as documented... but what about this:
In[8]:= DeleteDuplicates[$a, SameQ[#1, #2]&] // InputForm
Out[8]//InputForm= {11/5, 2.2000000000000006, 2.1999999999999997}
It appears that DeleteDuplicates goes through a different branch of logic when the comparison function is manifestly SameQ as opposed to a function whose behaviour is identical to SameQ.
Tally is... Confused
Tally shows similar, but not identical, erratic behaviour:
In[9]:= Tally[$a] // InputForm
Out[9]//InputForm= {{11/5, 1}, {2.2000000000000006, 1}, {2.2, 2}}
In[10]:= Tally[$a, SameQ] // InputForm
Out[10]//InputForm= {{11/5, 1}, {2.2000000000000006, 1}, {2.2, 2}}
In[11]:= Tally[$a, SameQ[#1, #2]&] // InputForm
Out[11]//InputForm= {{11/5, 1}, {2.2000000000000006, 1}, {2.2000000000000006, 2}}
That last is particularly baffling, since the same number appears twice in the list with different counts.
Equal Suffers Similar Problems
Now, back to the problem of floating point equality. Equal fares a little bit better than SameQ -- but emphasis on "little". Equal looks at the last seven binary digits instead of the last one. That doesn't fix the problem, though... troublesome cases can always be found:
In[12]:= $x1 = 0.19999999999999823;
$x2 = 0.2;
$x3 = 0.2000000000000018;
In[15]:= Equal[$x1, $x2]
Out[15]= True
In[16]:= Equal[$x2, $x3]
Out[16]= True
In[17]:= Equal[$x1, $x3]
Out[17]= False (* Oops *)
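For reference, the tolerances behind these two comparisons appear to be exposed through the undocumented settings Internal`$SameQTolerance and Internal`$EqualTolerance, measured in decimal digits; treat the following as a sketch based on that assumption:
(* assumed undocumented settings; dividing by Log[10, 2] converts decimal digits to bits
   and should recover the "1 bit" and "7 bits" figures quoted above *)
{Internal`$SameQTolerance, Internal`$EqualTolerance}/Log[10, 2]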
The Villain Unmasked
The main culprit in all of this discussion is the floating-point real number format. It is simply not possible to represent arbitrary real numbers in full fidelity using a finite format. This is why Mathematica stresses symbolic form and makes every possible attempt to work with expressions in symbolic form for as long as possible. If one finds numeric forms to be unavoidable, then one must wade into that swamp called numerical analysis to sort out all of the corner cases involving equality and inequality.
Poor SameQ, Equal, DeleteDuplicates, Tally and all of their friends never stood a chance.

In my opinion, relying on Tally or DeleteDuplicates with the default (SameQ-like) comparison function on numerical values means relying on implementation details, because SameQ does not have well-defined semantics for numerical values. What you see is what would normally be called "undefined behavior" in other languages. What one should be doing to get robust results is to use
DeleteDuplicates[a,Equal]
or
Tally[a,Equal]
and similarly for Union (although I would not use Union, since an explicit test leads to quadratic complexity for it). OTOH, if your desire is to understand the internal implementation details because you want to make use of them, I cannot say much except to warn that this may cause more harm than good, particularly because these implementations are subject to change from version to version - even assuming that you get all their details right for some particular version.
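For instance, on the list from the question, the explicit Equal test should collapse all four entries into a single value; a quick sketch (the exact representative kept may vary):
a = {11/5, 2.2000000000000005, 2.2, 2.1999999999999997};
DeleteDuplicates[a, Equal] // InputForm
Tally[a, Equal] // InputForm
(* expected: {11/5} and {{11/5, 4}}, since Equal treats all four entries as equal *)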


On PackedArray, looking for advice for using them

I have not used PackedArray before, but just started looking at using them from reading some discussion on them here today.
What I have is lots of large size 1D and 2D matrices of all reals, and no symbolic (it is a finite difference PDE solver), and so I thought that I should take advantage of using PackedArray.
I have an initialization function where I allocate all the data/grids needed. So I went and used ToPackedArray on them. It seems a bit faster, but I need to do more performance testing to better compare speed before and after and also compare RAM usage.
But while I was looking at this, I noticed that some operations in M automatically return lists in PackedArray already, and some do not.
For example, this does not return packed array
a = Table[RandomReal[], {5}, {5}];
Developer`PackedArrayQ[a]
But this does
a = RandomReal[1, {5, 5}];
Developer`PackedArrayQ[a]
and this does
a = Table[0, {5}, {5}];
b = ListConvolve[ {{0, 1, 0}, {1, 4, 1}, {0, 1, 1}}, a, 1];
Developer`PackedArrayQ[b]
and also matrix multiplication does return result in packed array
a = Table[0, {5}, {5}];
b = a.a;
Developer`PackedArrayQ[b]
But element wise multiplication does not
b = a*a;
Developer`PackedArrayQ[b]
My question : Is there a list somewhere which documents which M commands return PackedArray vs. not? (assuming data meets the requirements, such as Real, not mixed, no symbolic, etc..)
Also, a minor question: do you think it is better to check first whether a list/matrix is already packed before calling ToPackedArray on it? I would think calling ToPackedArray on a list that is already packed will not cost anything, as the call will return right away.
thanks,
update (1)
Just wanted to mention that I found PackedArray symbols are not allowed in a demo CDF, as I got an error uploading one that used them. So I had to remove all my packing code. Since I mainly write demos, this topic is now of only academic interest for me. But I wanted to thank everyone for their time and good answers.
There isn't a comprehensive list. To point out a few things:
Basic operations with packed arrays will tend to remain packed:
In[66]:= a = RandomReal[1, {5, 5}];
In[67]:= Developer`PackedArrayQ /@ {a, a.a, a*a}
Out[67]= {True, True, True}
Note above that my version (8.0.4) doesn't unpack for element-wise multiplication.
Whether a Table will result in a packed array depends on the number of elements:
In[71]:= Developer`PackedArrayQ[Table[RandomReal[], {24}, {10}]]
Out[71]= False
In[72]:= Developer`PackedArrayQ[Table[RandomReal[], {24}, {11}]]
Out[72]= True
In[73]:= Developer`PackedArrayQ[Table[RandomReal[], {25}, {10}]]
Out[73]= True
On["Packing"] will turn on messages to let you know when things unpack:
In[77]:= On["Packing"]
In[78]:= a = RandomReal[1, 10];
In[79]:= Developer`PackedArrayQ[a]
Out[79]= True
In[80]:= a[[1]] = 0 (* force unpacking due to type mismatch *)
Developer`FromPackedArray::punpack1: Unpacking array with dimensions {10}. >>
Out[80]= 0
Operations that do per-element inspection will usually unpack the array,
In[81]:= a = RandomReal[1, 10];
In[82]:= Position[a, Max[a]]
Developer`FromPackedArray::unpack: Unpacking array in call to Position. >>
Out[82]= {{4}}
The penalty for calling ToPackedArray on an already packed list is small enough that I wouldn't worry about it too much:
In[90]:= a = RandomReal[1, 10^7];
In[91]:= Timing[Do[Identity[a], {10^5}];]
Out[91]= {0.028089, Null}
In[92]:= Timing[Do[Developer`ToPackedArray[a], {10^5}];]
Out[92]= {0.043788, Null}
The frontend prefers packed to unpacked arrays, which can show up when dealing with Dynamic and Manipulate:
In[97]:= Developer`PackedArrayQ[{1}]
Out[97]= False
In[98]:= Dynamic[Developer`PackedArrayQ[{1}]]
Out[98]= True
When looking into performance, focus on cases where large lists are getting unpacked, rather than the small ones. Unless the small ones are in big loops.
This is just an addendum to Brett's answer:
SystemOptions["CompileOptions"]
will give you the lengths being used for which a function will return a packed array. So if you did need to pack a small list, as an alternative to using Developer`ToPackedArray you could temporarily set a smaller number for one of the compile options. e.g.
SetSystemOptions["CompileOptions" -> {"TableCompileLength" -> 20}]
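If you go that route, it may be worth saving and restoring the original value; a sketch, assuming "TableCompileLength" is available under "CompileOptions" as shown above:
(* save the current threshold, lower it temporarily, then restore it *)
old = "TableCompileLength" /. ("CompileOptions" /. SystemOptions["CompileOptions"]);
SetSystemOptions["CompileOptions" -> {"TableCompileLength" -> 20}];
Developer`PackedArrayQ[Table[RandomReal[], {25}]]  (* now packed despite being small *)
SetSystemOptions["CompileOptions" -> {"TableCompileLength" -> old}];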
Note also some differences between functions which, to me at least, don't seem intuitive, so I generally have to test these kinds of things whenever I use them rather than instinctively knowing what will work best:
f = # + 1 &;
g[x_] := x + 1;
data = RandomReal[1, 10^6];
On["Packing"]
Timing[Developer`PackedArrayQ[f /@ data]]
{0.131565, True}
Timing[Developer`PackedArrayQ[g /@ data]]
Developer`FromPackedArray::punpack1: Unpacking array with dimensions {1000000}.
{1.95083, False}
Another addition to Brett's answer: if a list is already a packed array then ToPackedArray is very fast, since this is checked quite early. Also you might find this valuable:
http://library.wolfram.com/infocenter/Articles/3141/
In general for numerics stuff look for talks from Rob Knapp and/or Mark Sofroniou.
When I develop numerics codes, I write the function and then use On["Packing"] to make sure that everything is packed that needs to be packed.
Concerning Mike's answer, the threshold was introduced because for small arrays there is overhead. Where the threshold lies is hardware dependent. It might be an idea to write a function that sets these thresholds based on measurements done on the computer.

Coefficient function is slow

Please consider:
Clear[x]
expr = Sum[x^i, {i, 15}]^30;
CoefficientList[expr, x]; // Timing
Coefficient[Expand@expr, x, 234]; // Timing
Coefficient[expr, x, 234]; // Timing
{0.047, Null}
{0.047, Null}
{4.93, Null}
Help states:
Coefficient works whether or not expr is explicitly given in expanded form.
Is there a reason why Coefficient needs to be so slow in the last case?
Here is a hack which may enable your code to be fast, but I don't guarantee it to always work correctly:
ClearAll[withFastCoefficient];
SetAttributes[withFastCoefficient, HoldFirst];
withFastCoefficient[code_] :=
Block[{Binomial},
Binomial[x_, y_] := 10 /; ! FreeQ[Stack[_][[-6]], Coefficient];
code]
Using it, we get:
In[58]:= withFastCoefficient[Coefficient[expr,x,234]]//Timing
Out[58]= {0.172,3116518719381876183528738595379210}
The idea is that Coefficient uses Binomial internally to estimate the number of terms, and then expands (calls Expand) if the number of terms is less than 1000, which you can check by using Trace[..., TraceInternal -> True]. When it does not expand, it computes lots of sums of large coefficient lists dominated by zeros, and this is apparently slower than expanding, for a range of expressions. What I do is fool Binomial into returning a small number (10), but I also tried to make it so that it only affects the Binomial called internally by Coefficient:
In[67]:= withFastCoefficient[f[Binomial[7,4]]Coefficient[expr,x,234]]
Out[67]= 3116518719381876183528738595379210 f[35]
I cannot, however, guarantee that there are no examples where Binomial somewhere else in the code will be computed incorrectly.
EDIT
Of course, a safer alternative that always exists is to redefine Coefficient using the Villegas - Gayley trick, expanding an expression inside it and calling it again:
Unprotect[Coefficient];
Module[{inCoefficient},
Coefficient[expr_, args__] :=
Block[{inCoefficient = True},
Coefficient[Expand[expr], args]] /; ! TrueQ[inCoefficient]
];
Protect[Coefficient];
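With that redefinition in place, the slow call from the question should now go through Expand internally; a sketch (timings are machine-dependent):
Coefficient[expr, x, 234]; // Timing
(* now comparable to calling Coefficient[Expand@expr, x, 234] directly, rather than ~5 seconds *)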
EDIT 2
My first suggestion had an advantage that we defined a macro which modified the properties of functions locally, but disadvantage that it was unsafe. My second suggestion is safer but modifies Coefficient globally, so it will always expand until we remove that definition. We can have the best of both worlds with the help of Internal`InheritedBlock, which creates a local copy of a given function. Here is the code:
ClearAll[withExpandingCoefficient];
SetAttributes[withExpandingCoefficient, HoldFirst];
withExpandingCoefficient[code_] :=
Module[{inCoefficient},
Internal`InheritedBlock[{Coefficient},
Unprotect[Coefficient];
Coefficient[expr_, args__] :=
Block[{inCoefficient = True},
Coefficient[Expand[expr], args]] /; ! TrueQ[inCoefficient];
Protect[Coefficient];
code
]
];
The usage is similar to the first case:
In[92]:= withExpandingCoefficient[Coefficient[expr,x,234]]//Timing
Out[92]= {0.156,3116518719381876183528738595379210}
The main Coefficient function remains unaffected however:
In[93]:= DownValues[Coefficient]
Out[93]= {}
Coefficient will not expand unless it deems it absolutely necessary to do so. This does indeed avoid memory explosions. I believe it has been this way since version 3 (I think I was working on it around 1995 or so).
It can also be faster to avoid expansion. Here is a simple example.
In[28]:= expr = Sum[x^i + y^j + z^k, {i, 15}, {j, 10}, {k, 20}]^20;
In[29]:= Coefficient[expr, x, 234]; // Timing
Out[29]= {0.81, Null}
But this next appears to hang in version 8, and takes at least a half minute in the development Mathematica (where Expand was changed).
Coefficient[Expand[expr], x, 234]; // Timing
Possibly some heuristics should be added to look for univariates that will not explode. Does not seem like a high priority item though.
Daniel Lichtblau
expr = Sum[x^i, {i, 15}]^30;
scoeff[ex_, var_, n_] /; PolynomialQ[ex, var] :=
ex + O[var]^(n + 1) /.
Verbatim[SeriesData][_, 0, c_List, nmin_, nmax_, 1] :>
If[nmax - nmin != Length[c], 0, c[[-1]]];
Timing[scoeff[expr, x, 234]]
seems quite fast, too.
After some experimentation following Rolf Mertig's answer, this appears to be the fastest method on this type of expression, such as Sum[x^i, {i, 15}]^30:
SeriesCoefficient[expr, {x, 0, 234}]
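As a rough, machine-dependent check, the two series-based forms can be compared directly on the question's expression (a sketch; the coefficient value is the one obtained earlier):
expr = Sum[x^i, {i, 15}]^30;
Timing[scoeff[expr, x, 234]]
Timing[SeriesCoefficient[expr, {x, 0, 234}]]
(* both should return 3116518719381876183528738595379210 far faster than plain Coefficient *)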

Create expression trees from given sets of numbers and operations and find those that evaluate to a target number in Mathematica 8 or above

Given a set of numbers and a set of binary operations,
what is the fastest way to create random expression trees or exhaustively check every possible combination in Mathematica?
What I am trying to solve is given:
numbers={25,50,75,100,3,6} (* each can ONLY be used ONCE *)
operators={Plus,Subtract,Times,Divide} (* each can be used repeatedly *)
target=99
find expression trees that would evaluate to target.
I have two solutions whose performances I give for the case where expression trees contain exactly 4 of the numbers and 3 operators:
random sample & choice: ~25K trees / second
exhaustive scan: 806400 trees in ~2.15 seconds
(timed on a laptop with: Intel(R) Core(TM)2 Duo CPU T9300 @ 2.50GHz, 3 GB RAM; no parallelization used yet, but it would be most welcome in answers)
My notebooks are a bit messy at the moment. So I would first love to pose the question and hope for original ideas and answers while I clean up my code for sharing.
Largest possible case is where every expression tree uses up all the (6) numbers and 'Length[numbers]-1' (5) operators.
Performance of methods in the largest case is:
random sample & choice: ~21K trees / second
exhaustive scan: 23224320 trees in ~100 seconds
Also I am using Mathematica 8.0.1, so I am more than all ears if there are any ways to do it in OpenCL or using compiled functions with CompilationTarget -> "C", etc.
OK, this is not elegant or fast, and it's buggy, but it works (sometimes). It uses a Monte Carlo method, implementing the Metropolis algorithm for a weight function that I (arbitrarily) selected just to see if this would work. This was written some time ago for a similar problem; I suppose my Mathematica skills have improved, as it looks ugly now, but I have no time to fix it at the moment.
Execute this (it looks more reasonable when you paste it into a notebook):
ClearAll[swap];
swap[lst_, {p1_, p2_}] :=
  ReplacePart[lst, {p1 -> lst[[p2]], p2 -> lst[[p1]]}]
ClearAll[evalops];
(* first element of opslst is Identity *)
evalops[opslst_, ord_, nums_] :=
  Module[{curval}, curval = First@nums;
   Do[curval = opslst[[p]][curval, nums[[ord[[p]]]]],
    {p, 2, Length@nums}];
   curval]
ClearAll[randomizeOrder];
randomizeOrder[ordlst_] :=
  swap[ordlst, RandomInteger[{1, Length@ordlst}, 2]]
ClearAll[randomizeOps];
(* never touch the first element; pick replacements from the allowed operator set *)
randomizeOps[oplst_, allowedOps_] :=
  ReplacePart[oplst,
   {RandomInteger[{2, Length@oplst}] -> RandomChoice[allowedOps]}]
ClearAll[takeMCstep];
takeMCstep[goal_, opslst_, ord_, nums_, allowedops_] :=
  Module[{curres, newres, newops, neword, p},
   curres = evalops[opslst, ord, nums];
   newops = randomizeOps[opslst, allowedops];
   neword = randomizeOrder[ord];
   newres = evalops[newops, neword, nums];
   Switch[Abs[newres - goal],
    0, {newops, neword},
    _, (p = Abs[curres - goal]/Abs[newres - goal];
      If[RandomReal[] < p, {newops, neword}, {opslst, ord}])]]
then to solve your actual problem, do
ops = {Times, Plus, Subtract, Divide}
nums = {25, 50, 75, 100, 3, 6}
ord = Range[Length@nums]
(* the first element is Identity to simplify the logic later *)
oplist = {Identity}~Join~RandomChoice[ops, Length@nums - 1]
out = NestList[
  takeMCstep[99, #[[1]], #[[2]], nums, ops] &, {oplist, ord}, 10000]
and then to see that it worked,
ev = Map[evalops[#[[1]], #[[2]], nums] &, out];
ev // Last // N
ev // ListPlot[#, PlotMarkers -> None] &
giving a plot of the evaluated results over the iterations; thus, it obtained the correct order of operators and numbers after around 2000 tries.
As I said, it's ugly, inefficient, and badly programmed as it was a quick-and-dirty adaptation of a quick-and-dirty hack. If you're interested I can clean up and explain the code.
This was a fun question. Here's my full solution:
ExprEval[nums_, ops_] := Fold[
  #2[[1]][#1, #2[[2]]] &,
  First@nums,
  Transpose[{ops, Rest@nums}]]
SymbolicEval[nums_, ops_] := ExprEval[nums, ToString /@ ops]
GetExpression[nums_, ops_, target_] := Select[
  Tuples[ops, Length@nums - 1],
  (target == ExprEval[nums, #]) &]
Usage example:
nums = {-1, 1, 2, 3};
ops = {Plus, Subtract, Times, Divide};
solutions = GetExpression[nums, ops, 3]
ExprEval[nums, #] & /@ solutions
SymbolicEval[nums, #] & /@ solutions
Outputs:
{{Plus, Times, Plus}, {Plus, Divide, Plus}, {Subtract, Plus,
Plus}, {Times, Plus, Times}, {Divide, Plus, Times}}
{3, 3, 3, 3, 3}
{"Plus"["Times"["Plus"[-1, 1], 2], 3],
"Plus"["Divide"["Plus"[-1, 1], 2], 3],
"Plus"["Plus"["Subtract"[-1, 1], 2], 3],
"Times"["Plus"["Times"[-1, 1], 2], 3],
"Times"["Plus"["Divide"[-1, 1], 2], 3]}
How it works
The ExprEval function takes in the numbers and operations, and applies them using (I think) RPN:
ExprEval[{1, 2, 3}, {Plus, Times}] == (1 + 2) * 3
It does this by continually folding pairs of numbers using the appropriate operation.
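Here is a minimal sketch of that fold on the small example above, using FoldList to show the intermediate values:
FoldList[#2[[1]][#1, #2[[2]]] &, 1, Transpose[{{Plus, Times}, {2, 3}}]]
(* {1, 3, 9}: first 1 + 2, then (1 + 2)*3 *)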
Now that I have a way to evaluate an expression tree, I just needed to generate them. Using Tuples, I'm able to generate all the different operators that I would intersperse between the numbers.
Once I had all the possible operator combinations, I used Select to pick out the ones that evaluate to the target number.
Drawbacks
The solution above is really slow. Generating all the possible tuples is exponential in time. If there are k operations and n numbers, it's on the order of O(k^n).
For n = 10, it took 6 seconds to complete on Win 7 x64, Core i7 860, 12 GB RAM. The timings of the runs match the theoretical time complexity almost exactly:
In the original timing plot, the red line is the theoretical estimate and the blue is experimental; the x-axis is the size of the nums input and the y-axis is the time in seconds to enumerate all solutions.
The above solution also solves the problem using a functional programming style. It looks pretty, but the thing also sucks up a butt ton of memory since it's storing the full results at nearly every step.
It doesn't even make use of parallelization, and I'm not entirely certain how you would even parallelize the solution I produced.
Some limitations
Mr. Wizard brought to my attention that this code only solves for a particular set of solutions. Given some input such as {a, b, c, d, e, ...} it only permutes the operators in between the numbers; it doesn't permute the ordering of the numbers. If it were to permute the numbers as well, the time complexity would rise to O(k^n * n!) where k is the number of operators and n is the length of the input number array.
The following will produce the set of solutions for any permutation of the input numbers and operators:
(* generates a lists of the form
{
{number permutation, {{op order 1}, {op order 2}, ... }
}, ...
}*)
GetAllExpressions[nums_, ops_, target_] :=
 ParallelMap[{#, GetExpression[#, ops, target]} &,
  Tuples[nums, Length@nums]]
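A small usage sketch of my own (assuming parallel kernels are available; otherwise Map behaves the same way), keeping only the number orderings that actually have solutions:
nums = {-1, 1, 2, 3};
ops = {Plus, Subtract, Times, Divide};
Select[GetAllExpressions[nums, ops, 3], #[[2]] =!= {} &] // Short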

Algorithm for picking pattern free downvalues from a sparse definition list

I have the following problem.
I am developing a stochastic simulator which samples configurations of the system at random and stores the statistics of how many times each configuration has been visited at certain time instances. Roughly the code works like this
f[_Integer][{_Integer..}] :=0
...
someplace later in the code, e.g.,
index = get index;
c = get random configuration (i.e. a tuple of integers, say a pair {n1, n2});
f[index][c] = f[index][c] + 1;
which tags that configuration c has occurred once more in the simulation at time instance index.
Once the code has finished there is a list of definitions for f that looks something like this (I typed it by hand just to emphasize the most important parts)
?f
f[1][{1, 2}] = 112
f[1][{3, 4}] = 114
f[2][{1, 6}] = 216
f[2][{2, 7}] = 227
...
f[index][someconfiguration] = some value
...
f[_Integer][{_Integer..}] :=0
Please note that pattern free definitions that come first can be rather sparse. Also one cannot know which values and configurations will be picked.
The problem is to efficiently extract down values for a desired index, for example issue something like
result = ExtractConfigurationsAndOccurences[f, 2]
which should give a list with the structure
result = {list1, list2}
where
list1 = {{1, 6}, {2, 7}} (* the list of configurations that occurred during the simulation*)
list2 = {216, 227} (* how many times each of them occurred *)
The problem is that ExtractConfigurationsAndOccurences should be very fast. The only solution I could come up with was to use SubValues[f] (which gives the full list) and filter it with a Cases statement. I realize that this procedure should be avoided at any cost, since there will be exponentially many configurations (definitions) to test, which slows the code down considerably.
Is there a natural way in Mathematica to do this in a fast way?
I was hoping that Mathematica would see f[2] as a single head with many down values but using DownValues[f[2]] gives nothing. Also using SubValues[f[2]] results in an error.
This is a complete rewrite of my previous answer. It turns out that in my previous attempts I overlooked a much simpler method based on a combination of packed arrays and sparse arrays, which is much faster and more memory-efficient than all previous methods (at least in the range of sample sizes where I tested it), while only minimally changing the original SubValues-based approach. Since the question asked for the most efficient method, I will remove the other ones from the answer, given that they are quite a bit more complex and take a lot of space; those who would like to see them can inspect past revisions of this answer.
The original SubValues-based approach
We start by introducing a function to generate the test samples of configurations for us. Here it is:
Clear[generateConfigurations];
generateConfigurations[maxIndex_Integer, maxConfX_Integer, maxConfY_Integer,
nconfs_Integer] :=
Transpose[{
RandomInteger[{1, maxIndex}, nconfs],
Transpose[{
RandomInteger[{1, maxConfX}, nconfs],
RandomInteger[{1, maxConfY}, nconfs]
}]}];
We can generate a small sample to illustrate:
In[3]:= sample = generateConfigurations[2,2,2,10]
Out[3]= {{2,{2,1}},{2,{1,1}},{1,{2,1}},{1,{1,2}},{1,{1,2}},
{1,{2,1}},{2,{1,2}},{2,{2,2}},{1,{2,2}},{1,{2,1}}}
We have here only 2 indices, and configurations where both "x" and "y" numbers vary from 1 to 2 only - 10 such configurations.
The following function will help us imitate the accumulation of frequencies for configurations, as we increment SubValues-based counters for repeatedly occurring ones:
Clear[testAccumulate];
testAccumulate[ff_Symbol, data_] :=
Module[{},
ClearAll[ff];
ff[_][_] = 0;
Do[
doSomeStuff;
ff[#1][#2]++ & @@ elem;
doSomeMoreStaff;
, {elem, data}]];
The doSomeStuff and doSomeMoreStaff symbols are here to represent some code that might precede or follow the counting code. The data parameter is supposed to be a list of the form produced by generateConfigurations. For example:
In[6]:=
testAccumulate[ff,sample];
SubValues[ff]
Out[7]= {HoldPattern[ff[1][{1,2}]]:>2,HoldPattern[ff[1][{2,1}]]:>3,
HoldPattern[ff[1][{2,2}]]:>1,HoldPattern[ff[2][{1,1}]]:>1,
HoldPattern[ff[2][{1,2}]]:>1,HoldPattern[ff[2][{2,1}]]:>1,
HoldPattern[ff[2][{2,2}]]:>1,HoldPattern[ff[_][_]]:>0}
The following function will extract the resulting data (indices, configurations and their frequencies) from the list of SubValues:
Clear[getResultingData];
getResultingData[f_Symbol] :=
Transpose[{#[[All, 1, 1, 0, 1]], #[[All, 1, 1, 1]], #[[All, 2]]}] &@
  Most@SubValues[f, Sort -> False];
For example:
In[10]:= result = getResultingData[ff]
Out[10]= {{2,{2,1},1},{2,{1,1},1},{1,{2,1},3},{1,{1,2},2},{2,{1,2},1},
{2,{2,2},1},{1,{2,2},1}}
To finish with the data-processing cycle, here is a straightforward function to extract data for a fixed index, based on Select:
Clear[getResultsForFixedIndex];
getResultsForFixedIndex[data_, index_] :=
If[# === {}, {}, Transpose[#]] &[
Select[data, First@# == index &][[All, {2, 3}]]];
For our test example,
In[13]:= getResultsForFixedIndex[result,1]
Out[13]= {{{2,1},{1,2},{2,2}},{3,2,1}}
This is presumably close to what @zorank tried, in code.
A faster solution based on packed arrays and sparse arrays
As @zorank noted, this becomes slow for larger samples with more indices and configurations. We will now generate a large sample to illustrate that (note: this requires about 4-5 GB of RAM, so you may want to reduce the number of configurations if that exceeds the available RAM):
In[14]:=
largeSample = generateConfigurations[20,500,500,5000000];
testAccumulate[ff,largeSample];//Timing
Out[15]= {31.89,Null}
We will now extract the full data from the SubValues of ff:
In[16]:= (largeres = getResultingData[ff]); // Timing
Out[16]= {10.844, Null}
This takes some time, but one has to do this only once. But when we start extracting data for a fixed index, we see that it is quite slow:
In[24]:= getResultsForFixedIndex[largeres,10]//Short//Timing
Out[24]= {2.687,{{{196,26},{53,36},{360,43},{104,144},<<157674>>,{31,305},{240,291},
{256,38},{352,469}},{<<1>>}}}
The main idea we will use here to speed it up is to pack the individual lists inside largeres: those for indices, combinations and frequencies. While the full list cannot be packed, those parts individually can:
In[18]:= Timing[
subIndicesPacked = Developer`ToPackedArray[largeres[[All,1]]];
subCombsPacked = Developer`ToPackedArray[largeres[[All,2]]];
subFreqsPacked = Developer`ToPackedArray[largeres[[All,3]]];
]
Out[18]= {1.672,Null}
This also takes some time, but it is a one-time operation again.
The following functions will then be used to extract the results for a fixed index much more efficiently:
Clear[extractPositionFromSparseArray];
extractPositionFromSparseArray[HoldPattern[SparseArray[u___]]] := {u}[[4, 2, 2]]
Clear[getCombinationsAndFrequenciesForIndex];
getCombinationsAndFrequenciesForIndex[packedIndices_, packedCombs_,
packedFreqs_, index_Integer] :=
With[{positions =
extractPositionFromSparseArray[
SparseArray[1 - Unitize[packedIndices - index]]]},
{Extract[packedCombs, positions],Extract[packedFreqs, positions]}];
The SparseArray built from 1 - Unitize[packedIndices - index] is nonzero exactly at the positions where the index matches, and extractPositionFromSparseArray reads those positions straight out of its internal representation. Now, we have:
In[25]:=
getCombinationsAndFrequenciesForIndex[subIndicesPacked,subCombsPacked,subFreqsPacked,10]
//Short//Timing
Out[25]= {0.094,{{{196,26},{53,36},{360,43},{104,144},<<157674>>,{31,305},{240,291},
{256,38},{352,469}},{<<1>>}}}
We get a 30 times speed-up w.r.t. the naive Select approach.
Some notes on complexity
Note that the second solution is faster because it uses optimized data structures, but its complexity is the same as that of the Select-based one, which is linear in the length of the total list of unique combinations for all indices. Therefore, in theory, the previously discussed solutions based on nested hash-tables etc. may be asymptotically better. The problem is that in practice we will probably hit memory limitations long before that. For the 10 million configurations sample, the above code was still 2-3 times faster than the fastest solution I posted before.
EDIT
The following modification:
Clear[getCombinationsAndFrequenciesForIndex];
getCombinationsAndFrequenciesForIndex[packedIndices_, packedCombs_,
packedFreqs_, index_Integer] :=
With[{positions =
extractPositionFromSparseArray[
SparseArray[Unitize[packedIndices - index], Automatic, 1]]},
{Extract[packedCombs, positions], Extract[packedFreqs, positions]}];
makes the code twice as fast again. Moreover, for more sparse indices (say, calling the sample-generation function with parameters like generateConfigurations[2000, 500, 500, 5000000]), the speed-up with respect to the Select-based function is about 100 times.
I'd probably use SparseArrays here (see update below), but if you insist on using functions and *Values to store and retrieve values an approach would be to have the first part (f[2] etc.) replaced by a symbol you create on the fly like:
Table[Symbol["f" <> IntegerString[i, 10, 3]], {i, 11}]
(* ==> {f001, f002, f003, f004, f005, f006, f007, f008, f009, f010, f011} *)
Symbol["f" <> IntegerString[56, 10, 3]]
(* ==> f056 *)
Symbol["f" <> IntegerString[56, 10, 3]][{3, 4}] = 12;
Symbol["f" <> IntegerString[56, 10, 3]][{23, 18}] = 12;
Symbol["f" <> IntegerString[56, 10, 3]] // Evaluate // DownValues
(* ==> {HoldPattern[f056[{3, 4}]] :> 12, HoldPattern[f056[{23, 18}]] :> 12} *)
f056 // DownValues
(* ==> {HoldPattern[f056[{3, 4}]] :> 12, HoldPattern[f056[{23, 18}]] :> 12} *)
Personally I prefer Leonid's solution, as it's much more elegant but YMMV.
Update
On OP's request, about using SparseArrays:
Large SparseArrays take up a fraction of the size of standard nested lists. We can make f a large (100,000 entries) sparse array of sparse arrays:
f = SparseArray[{_} -> 0, 100000];
f // ByteCount
(* ==> 672 *)
(* initialize f with sparse arrays, takes a few seconds with f this large *)
Do[ f[[i]] = SparseArray[{_} -> 0, {100, 110}], {i,100000}] // Timing//First
(* ==> 18.923 *)
(* this takes about 2.5% of the memory that a normal array would take: *)
f // ByteCount
(* ==> 108000040 *)
ConstantArray[0, {100000, 100, 100}] // ByteCount
(* ==> 4000000176 *)
(* counting phase *)
f[[1]][[1, 2]]++;
f[[1]][[1, 2]]++;
f[[1]][[42, 64]]++;
f[[2]][[100, 11]]++;
(* reporting phase *)
f[[1]] // ArrayRules
f[[2]] // ArrayRules
f // ArrayRules
(*
==>{{1, 2} -> 2, {42, 64} -> 1, {_, _} -> 0}
==>{{100, 11} -> 1, {_, _} -> 0}
==>{{1, 1, 2} -> 2, {1, 42, 64} -> 1, {2, 100, 11} -> 1, {_, _, _} -> 0}
*)
As you can see, ArrayRules makes a nice list with contributions and counts. This can be done for each f[i] separately or the whole bunch together (last line).
In some scenarios (depending upon the performance needed to generate the values), the following easy solution using an auxiliary list (f[i,0]) may be useful:
f[_Integer][{_Integer ..}] := 0;
f[_Integer, 0] := Sequence @@ {};
Table[
r = RandomInteger[1000, 2];
f[h = RandomInteger[100000]][r] = RandomInteger[10];
f[h, 0] = Union[f[h, 0], {r}];
, {i, 10^6}];
ExtractConfigurationsAndOccurences[f_, i_] := {f[i, 0], f[i][#] & /@ f[i, 0]};
Timing@ExtractConfigurationsAndOccurences[f, 10]
Out[252]= {4.05231*10^-15, {{{172, 244}, {206, 115}, {277, 861}, {299,
862}, {316, 194}, {361, 164}, {362, 830}, {451, 306}, {614,
769}, {882, 159}}, {5, 2, 1, 5, 4, 10, 4, 4, 1, 8}}}
Many thanks for everyone on the help provided. I've been thinking a lot about everybody's input and I believe that in the simulation setup the following is the optimal solution:
SetAttributes[linkedList, HoldAllComplete];
temporarySymbols = linkedList[];
SetAttributes[bookmarkSymbol, Listable];
bookmarkSymbol[symbol_]:=
With[{old = temporarySymbols}, temporarySymbols= linkedList[old,symbol]];
registerConfiguration[index_]:=registerConfiguration[index]=
Module[
{
cs = linkedList[],
bookmarkConfiguration,
accumulator
},
(* remember the symbols we generate so we can remove them later *)
bookmarkSymbol[{cs,bookmarkConfiguration,accumulator}];
getCs[index] := List @@ Flatten[cs, Infinity, linkedList];
getCsAndFreqs[index] := {getCs[index], accumulator /@ getCs[index]};
accumulator[_]=0;
bookmarkConfiguration[c_]:=bookmarkConfiguration[c]=
With[{oldCs=cs}, cs = linkedList[oldCs, c]];
Function[c,
bookmarkConfiguration[c];
accumulator[c]++;
]
]
pattern = Verbatim[RuleDelayed][Verbatim[HoldPattern][HoldPattern[registerConfiguration [_Integer]]],_];
clearSimulationData :=
Block[{symbols},
DownValues[registerConfiguration]=DeleteCases[DownValues[registerConfiguration],pattern];
symbols = List @@ Flatten[temporarySymbols, Infinity, linkedList];
(*Print["symbols to purge: ", symbols];*)
ClearAll /@ symbols;
temporarySymbols = linkedList[];
]
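A minimal usage sketch of my own (not part of the benchmark below): registerConfiguration[index] returns the accumulating function for that index, so registering and reading back a few configurations looks like this:
registerConfiguration[1][{3, 4}];
registerConfiguration[1][{3, 4}];
registerConfiguration[1][{5, 6}];
getCsAndFreqs[1]
(* expected: {{{3, 4}, {5, 6}}, {2, 1}} *)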
It is based on Leonid's solution from one of the previous posts, appended with belsairus' suggestion to include extra indexing for configurations that have been processed. The previous approaches are adapted so that configurations can be naturally registered and extracted using more or less the same code. This hits two flies at once, since bookkeeping and retrieval are strongly interrelated.
This approach will work better in the situation when one wants to add simulation data incrementally (all curves are normally noisy so one has to add runs incrementally to obtain good plots). The sparse array approach will work better when data are generated in one go and then analyzed, but I do not remember being personally in such a situation where I had to do that.
Also, I was rather naive thinking that the data extraction and generation could be treated separately. In this particular case it seems one should have both perspectives in mind. I profoundly apologise for bluntly dismissing any previous suggestions in this direction (there were few implicit ones).
There are some open/minor problems that I do not know how to handle, e.g. when clearing the symbols I cannot clear headers like accumulator$164, I can only clear the subvalues associated with them. I have no clue why. Also, if With[{oldCs = cs}, cs = linkedList[oldCs, c]]; is changed into something like cs = linkedList[cs, c];, configurations are not stored. I have no clue either why the second option does not work. But these minor problems are well-defined satellite issues that one can address in the future. By and large the problem seems solved by the generous help from all involved.
Many thanks again for all the help.
Regards
Zoran
p.s. There are some timings, but to understand what is going on I will append the code that is used for benchmarking. In brief, the idea is to generate lists of configurations and just Map through them by invoking registerConfiguration. This essentially simulates the data generation process. Here is the code used for testing:
fillSimulationData[sampleArg_] :=MapIndexed[registerConfiguration[#2[[1]]][#1]&, sampleArg,{2}];
sampleForIndex[index_]:=
Block[{nsamples,min,max},
min = Max[1,Floor[(9/10)maxSamplesPerIndex]];
max = maxSamplesPerIndex;
nsamples = RandomInteger[{min, max}];
RandomInteger[{1,10},{nsamples,ntypes}]
];
generateSample :=
Table[sampleForIndex[index],{index, 1, nindexes}];
measureGetCsTime := ((First@Timing[getCs[#]]) & /@ Range[1, nindexes]) // Max
measureGetCsAndFreqsTime := ((First@Timing[getCsAndFreqs[#]]) & /@ Range[1, nindexes]) // Max
reportSampleLength[sampleArg_] := StringForm["Total number of confs = ``, smallest accumulator length ``, largest accumulator length = ``", Sequence @@ ({Total[#], Min[#], Max[#]} &[Length /@ sampleArg])]
The first example is relatively modest:
clearSimulationData;
nindexes=100;maxSamplesPerIndex = 1000; ntypes = 2;
largeSample1 = generateSample;
reportSampleLength[largeSample1];
Total number of confs = 94891, smallest accumulator length 900, largest accumulator length = 1000;
First@Timing@fillSimulationData[largeSample1]
gives 1.375 secs which is fast I think.
With[{times = Table[measureGetCsTime, {50}]},
ListPlot[times, Joined -> True, PlotRange -> {0, Max[times]}]]
gives times around 0.016 secs, and
With[{times = Table[measureGetCsAndFreqsTime, {50}]},
ListPlot[times, Joined -> True, PlotRange -> {0, Max[times]}]]
gives the same times. Now the real killer:
nindexes = 10; maxSamplesPerIndex = 100000; ntypes = 10;
largeSample3 = generateSample;
largeSample3 // Short
{{{2,2,1,5,1,3,7,9,8,2},92061,{3,8,6,4,9,9,7,8,7,2}},8,{{4,10,1,5,9,8,8,10,8,6},95498,{3,8,8}}}
reported as
Total number of confs = 933590, smallest accumulator length 90760, largest accumulator length = 96876
gives generation times of ca. 1.969 - 2.016 secs, which is unbelievably fast. I mean, this is like going through a gigantic list of ca. one million elements and applying a function to each element.
The extraction times for configs and {configs, freqs} are roughly 0.015 and 0.03 secs respectively.
To me this is mind-blowing speed that I would never expect from Mathematica!

Changing the Diagonals of a Matrix with Mathematica

Is there an elegant way to change the diagonals of a matrix to a new list of values, the
equivalent of Band with SparseArray?
Say I have the following matrix (see below)
(mat = Array[Subscript[a, ##] &, {4, 4}]) // MatrixForm
and I'd like to change the main diagonal to the following to get "new mat" (see below)
newMainDiagList = Flatten@Array[Subscript[new, ##] &, {1, 4}]
I know it is easy to change the main diagonal to a given value using ReplacePart. For example:
ReplacePart[mat, {i_, i_} -> 0]
I'd also like not to be restricted to the main diagonal (in the same way that Band is not so restricted with SparseArray)
(The method I use at the moment is the following!)
(Normal@SparseArray[Band[{1, 1}] -> newMainDiagList] +
   ReplacePart[mat, {i_, i_} -> 0]) // MatrixForm
(Desired Output is 'new mat')
Actually, you don't need to use Normal whatsoever. A SparseArray plus a "normal" matrix gives you a "normal" matrix. Using Band is, on initial inspection, the most flexible approach, but an effective (and slightly less flexible) alternative is:
DiagonalMatrix[newDiagList] + ReplacePart[mat, {i_,i_}->0]
DiagonalMatrix also accepts a second integer parameter which allows you to specify which diagonal newDiagList represents, with the main diagonal represented by 0.
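For example, to replace the first superdiagonal of the 4x4 matrix (a sketch; DiagonalMatrix[list, k] produces a matrix of size Length[list] + Abs[k], so only the first three values of newDiagList are used here):
DiagonalMatrix[newDiagList[[;; 3]], 1] + ReplacePart[mat, {i_, j_} /; j == i + 1 -> 0]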
The most elegant alternative, however, is to use ReplacePart a little more effectively: the replacement Rule can be a RuleDelayed, e.g.
ReplacePart[mat, {i_,i_} :> newDiagList[[i]] ]
which does your replacement directly without the intermediate steps.
Edit: to mimic Band's behavior, we can also add conditions to the pattern via /;. For instance,
ReplacePart[mat, {i_, j_} /; j == i + 1 :> newDiagList[[i]]]
replaces the diagonal immediately above the main one (Band[{1,2}]), and
ReplacePart[mat, {i_, i_} /; i > 2 :> newDiagList[[i]]]
would only replace the last two elements of the main diagonal in a 4x4 matrix (Band[{3,3}]). But, it is much simpler using ReplacePart directly.
