Tree data structure in Mathematica

I have used Mathematica mostly as a mathematics workbench and for writing relatively small ad-hoc programs. I am, however, designing a system which I intend to program in Mathematica. I need to store data in a tree, and search and traverse the tree. Although I know how to implement a tree, I prefer standard, tested code. I looked at what sort of packages there are for basic data structures at the Mathematica users wiki. I have found none, although there is a small example in the Mathematica documentation.
Now to my question(s):
Is there an (open source) package for data structures available somewhere?
What approach have you used with regard to data structures? Gradually developing your own util package?
(Not a question, just a remark. Maybe... the lack of lots of available open source packages is the reason why Mathematica doesn't have the momentum it deserves. A chicken/egg issue, I am afraid.)

In Mathematica, most of what you do is based on expressions. Expressions naturally have a tree structure. For depth-first traversals (which are probably the most common), you can use functions like Scan, Map, Cases, etc. The difference w.r.t. more traditional languages is that there is no simple way to preserve the identity of an individual node in an expression tree, since there are no pointers in Mathematica. Also, many operations on expressions that are idiomatic in Mathematica will copy the entire expression when you only need to modify it in a few places, because expressions are immutable.
Using immutable Mathematica expressions as trees still has several advantages. One is that, because they are immutable, it is easy to understand what they store by just looking at them (state and behavior are not mixed). Another is that there are efficient and generic functions such as Map, MapIndexed or Scan that traverse them. For example, the visitor design pattern is invisible - it is just Map[f, tree, Infinity], built into the language. Also, there are built-in functions such as Cases, Replace, ReplaceAll, etc., which allow one to write very concise and declarative code to destructure trees, find pieces of trees with certain syntax or satisfying some condition, etc. Since trees are not restricted to being built only from lists and can be built from different heads, one can effectively use this to write very concise tree-processing code. Finally, the freedom to very easily build any tree structure you want makes it much easier to perform experiments and prototype things, in the spirit of exploratory and bottom-up programming, which shortens the development cycle and ultimately leads to better designs.
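For instance, here is a minimal illustration of this style on an ordinary expression (f, g, h and u are arbitrary inert heads chosen for the example):
tree = f[a, g[b, c], g[h[d], e]];
Cases[tree, g[__], Infinity] (* all subtrees with head g: {g[b, c], g[h[d], e]} *)
Level[tree, {-1}] (* the leaves: {a, b, c, d, e} *)
Map[u, tree, Infinity] (* wrap every node below the root in u *)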
That said, you can certainly implement a "stateful" (mutable) tree data structure. The real reason it has not generally been done yet is, I suspect, the performance hit associated with building, modifying and traversing such a tree, since it will undergo a full symbolic evaluation process at every step (see this post for more details on that). For two examples of how a binary search tree can be used in a Mathematica context for pretty efficient code, see my posts here (generic symbolic setting) and here (in the context of compiled code). For general ways to construct data structures idiomatically in Mathematica, I recommend the books of Roman Maeder: "Programming in Mathematica", "The Mathematica Programmer I & II", and especially "Computer Science with Mathematica". In the latter he has a detailed discussion of how to implement a binary search tree in Mathematica. EDIT: As @Simon mentioned, the talk by @Daniel Lichtblau is also a great resource, which shows how to build data structures and make them efficient.
Regarding general ways to implement data structures in Mathematica which would incorporate some state, here is a simple example extracted from my post in this MathGroup thread - it implements a "pair" data structure.
Unprotect[pair, setFirst, getFirst, setSecond, getSecond, new, delete];
ClearAll[pair, setFirst, getFirst, setSecond, getSecond, new, delete];
Module[{first, second},
first[_] := {};
second[_] := {};
pair /: new[pair[]] := pair[Unique[]];
pair /: pair[tag_].delete[] := (first[tag] =.; second[tag] =.);
pair /: pair[tag_].setFirst[value_] := first[tag] = value;
pair /: pair[tag_].getFirst[] := first[tag];
pair /: pair[tag_].setSecond[value_] := second[tag] = value;
pair /: pair[tag_].getSecond[] := second[tag];
Format[pair[x_Symbol]] := "pair[" <> ToString[Hash[x]] <> "]";
];
Protect[pair, setFirst, getFirst, setSecond, getSecond, new, delete];
Here is how you could use it:
pr = new[pair[]];
pr.setFirst[10];
pr.setSecond[20];
{pr.getFirst[], pr.getSecond[]}
{10, 20}
Creating a list of new pair objects:
pairs = Table[new[pair[]], {10}]
{"pair[430427975]", "pair[430428059]", "pair[430428060]", "pair[430428057]",
"pair[430428058]", "pair[430428063]", "pair[430428064]", "pair[430428061]",
"pair[430428062]", "pair[430428051]"}
Setting the fields:
Module[{i},
For[i = 1, i <= 10, i++,
pairs[[i]].setFirst[10*i];
pairs[[i]].setSecond[20*i];]]
Checking the fields:
#.getFirst[] & /@ pairs
{10, 20, 30, 40, 50, 60, 70, 80, 90, 100}
#.getSecond[] & /@ pairs
{20, 40, 60, 80, 100, 120, 140, 160, 180, 200}
In the post I mentioned there is a more detailed discussion. One big problem for "objects" created in this way is that there is no automatic garbage collection for them, which may be one of the major reasons why OOP extensions implemented in top-level Mathematica itself did not really take off.
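To make the garbage-collection point concrete, here is what manual cleanup looks like with the pair type above (a sketch; after delete[], the accessors fall back to the default {} definitions):
pr = new[pair[]];
pr.setFirst[1];
pr.delete[] (* must be called explicitly - nothing reclaims first[tag] otherwise *)
pr.getFirst[] (* {} - the default value; the stored 1 is gone *)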
There are several OOP extensions for Mathematica, such as the classes.m package by Roman Maeder (the source is in his "Mathematica Programmer" book), the Objectica commercial package, and several others. But until Mathematica itself provides efficient mechanisms (perhaps based on some kind of pointer or reference mechanism) for building mutable data structures (if this ever happens), there will probably be a large performance hit associated with top-level implementations of such data structures in mma. Also, since mma is based on immutability as one of its core ideas, it is not very easy to make mutable data structures fit well with other idioms of Mathematica programming.
EDIT
Here is a bare-bones stateful tree implementation along the lines of the example above:
Module[{parent, children, value},
children[_] := {};
value[_] := Null;
node /: new[node[]] := node[Unique[]];
node /: node[tag_].getChildren[] := children[tag];
node /: node[tag_].addChild[child_node, index_] :=
children[tag] = Insert[children[tag], child, index];
node /: node[tag_].removeChild[index_] :=
children[tag] = Delete[children[tag], index];
node /: node[tag_].getChild[index_] := children[tag][[index]];
node /: node[tag_].getValue[] := value[tag];
node /: node[tag_].setValue[val_] := value[tag] = val;
];
Some examples of use:
In[68]:= root = new[node[]]
Out[68]= node[$7]
In[69]:= root.addChild[new[node[]], 1]
Out[69]= {node[$8]}
In[70]:= root.addChild[new[node[]], 2]
Out[70]= {node[$8], node[$9]}
In[71]:= root.getChild[1].addChild[new[node[]], 1]
Out[71]= {node[$10]}
In[72]:= root.getChild[1].getChild[1].setValue[10]
Out[72]= 10
In[73]:= root.getChild[1].getChild[1].getValue[]
Out[73]= 10
For one non-trivial example of the use of this mutable tree data structure, see this post of mine. It also compares this method with one that more heavily reuses Mathematica's native data structures and functions, and illustrates well the points discussed at the beginning of this post.

I have used Mathematica mostly as a mathematics workbench and for writing relatively small ad-hoc programs.
Mathematica really excels at this.
What approach have you used with regard to data structures? Gradually developing your own util package?
I avoid creating my own data structures in Mathematica because it cannot handle them efficiently. Specifically, general data structures tend to be 10-1,000× slower in Mathematica than elsewhere, which greatly limits their practical usefulness. For example, Mathematica is 100× slower than F# at computing the range of depths in a red-black tree.
Logic programming with lists is one example where Mathematica is typically orders of magnitude slower than compiled languages. The following Mathematica program uses linked lists to solve the n-queens problem:
safe[{x0_, y0_}][{x1_, y1_}] :=
x0 != x1 && y0 != y1 && x0 - y0 != x1 - y1 && x0 + y0 != x1 + y1
filter[_, {}] := {}
filter[p_, {h_, t_}] := If[p[h], {h, filter[p, t]}, filter[p, t]]
search[n_, nqs_, qs_, {}, a_] := If[nqs == n, a + 1, a]
search[n_, nqs_, qs_, {q_, ps_}, a_] :=
search[n, nqs, qs, ps,
search[n, nqs + 1, {q, qs}, filter[safe[q], ps], a]]
ps[n_] :=
Fold[{#2, #1} &, {}, Flatten[Table[{i, j}, {i, n}, {j, n}], 1]]
solve[n_] := search[n, 0, {}, ps[n], 0]
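For reference, the solver is invoked as below; the 8-queens problem is well known to have 92 solutions, so this should return 92:
solve[8]
(* 92 *)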
Here is the equivalent F#:
let safe (x0, y0) (x1, y1) =
x0<>x1 && y0<>y1 && x0-y0<>x1-y1 && x0+y0<>x1+y1
let rec filter f = function
| [] -> []
| x::xs -> if f x then x::filter f xs else filter f xs
let rec search n nqs qs ps a =
match ps with
| [] -> if nqs=n then a+1 else a
| q::ps ->
search n (nqs+1) (q::qs) (filter (safe q) ps) a
|> search n nqs qs ps
let ps n =
[ for i in 1..n do
for j in 1..n do
yield i, j ]
let solve n = search n 0 [] (ps n) 0
solve 8
Solving the 8-queens problem takes 10.5s with Mathematica and 0.07s with F#. So F# is 150× faster than Mathematica in this case.
The Stack Overflow question Mathematica "linked lists" and performance gives a more extreme example. Naive translation of that Mathematica code into F# gives an equivalent program that runs between 4,000 and 200,000× faster than the Mathematica version:
let rand = System.Random()
let xs = List.init 10000 (fun _ -> rand.Next 100)
Array.init 100 (fun _ ->
let t = System.Diagnostics.Stopwatch.StartNew()
ignore(List.length xs)
t.Elapsed.TotalSeconds)
Specifically, Mathematica takes 0.156s to 16s to perform a single iteration whereas the F# takes 42µs to 86µs.
If I really want to stay in Mathematica then I shoehorn everything I'm doing into Mathematica's handful of built-in data structures, e.g. Dispatch.
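For instance, Dispatch turns a long list of rules into a hashed lookup table, so replacement no longer scans the rules linearly (a minimal sketch):
rules = Dispatch[Table[i -> i^2, {i, 10^5}]];
12345 /. rules (* 152399025, found in effectively constant time *)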

Related

Fastest way to count elements matching predicate

Earlier today I asked if there's an idiomatic way to count elements matching a predicate function in Mathematica, as I was concerned with performance.
My initial approach for a given predicate pred was the following:
PredCount1[lst_, pred_] := Length@Select[lst, pred];
and I got a suggestion to instead use
PredCount2[lst_, pred_] := Count[lst, x_ /; pred@x];
I started profiling these functions, with different lst sizes and pred functions, and added two more definitions:
PredCount3[lst_, pred_] := Count[Thread@pred@lst, True];
PredCount4[lst_, pred_] := Total[If[pred@#, 1, 0] & /@ lst];
My data samples were lists of sizes ranging from 1 to 10 million elements, and my test functions were EvenQ, #<5& and PrimeQ. The following graphs show the time taken.
EvenQ
PredCount2 is slowest, 3 and 4 duke it out.
Comparison predicate: #<5&
I've selected this function because it's close to what I need in my actual problem. Don't worry that this is a silly test function; it actually proves that the 4th function has some merit, which is why I ended up using it in my solution.
Same as EvenQ, but 3 is clearly slower than 4.
PrimeQ
This is just bizarre. Everything is flipped. I don't suspect caching as the culprit here, since the worst values are for the function computed last.
So, what's the right (fastest) way to count the number of elements in a list, that match a given predicate function?
You are seeing the result of auto-compilation.
First, for Listable functions such as EvenQ and PrimeQ, the use of Thread is unnecessary:
EvenQ[{1, 2, 3}]
{False, True, False}
This also explains why PredCount3 performs well on these functions. (They are internally optimized for threading over a list.)
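Consequently, for a Listable predicate one can drop Thread from PredCount3 entirely (a sketch):
PredCount3b[lst_, pred_] := Count[pred[lst], True];
PredCount3b[{1, 2, 3, 4}, EvenQ] (* 2 *)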
Now let us look at timings.
dat = RandomInteger[1*^6, 1*^6];
test = # < 5 &;
First@Timing[#[dat, test]] & /@ {PredCount1, PredCount2, PredCount3, PredCount4}
{0.343, 0.437, 0.25, 0.047}
If we change a System Option to prevent auto-compilation within Map and run the test again:
SetSystemOptions["CompileOptions" -> {"MapCompileLength" -> Infinity}]
First@Timing[#[dat, test]] & /@ {PredCount1, PredCount2, PredCount3, PredCount4}
{0.343, 0.452, 0.234, 0.765}
You can clearly see that without compilation PredCount4 is much slower. In short, if your test function can be compiled by Mathematica this is a good option.
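Pushing this further, one can compile the counting loop explicitly. This is a sketch with the test hard-coded; countLessThan5 is a made-up name, and CompilationTarget -> "C" (which requires a C compiler) can be omitted:
countLessThan5 = Compile[{{l, _Integer, 1}},
Total[Table[If[l[[i]] < 5, 1, 0], {i, Length[l]}]],
CompilationTarget -> "C"];
countLessThan5[dat] (* same count as the PredCount functions on dat *)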
Here are some other examples of fast counting using numeric functions.
The nature of the integers in the list can have a significant effect on the achievable timings. The use of Tally can improve performance if the range of the integers is constrained.
(* Count items in the list matching predicate, pred *)
PredCountID[lst_, pred_] :=
Select[Tally@lst, pred@First@# &]\[Transpose] // Last // Total
(* Define the values over which to check timings *)
ranges = {100, 1000, 10000, 100000, 1000000};
sizes = {100, 1000, 10000, 100000, 1000000, 10000000,100000000};
For PrimeQ this function gives the following timings:
This shows that even in a 10^8-sized list, primes can be counted in less than a tenth of a second if the integers are drawn from the set {0, ..., 100000}, and below the resolution of Timing if they lie within a small range such as 1 to 100.
Because the predicate only has to be applied over the set of Tally values, this approach is relatively insensitive to the exact predicate function.
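Usage mirrors the earlier PredCount functions, for example:
dat = RandomInteger[{1, 100}, 10^6];
PredCountID[dat, PrimeQ]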

On PackedArray, looking for advice for using them

I have not used PackedArray before, but just started looking at using them from reading some discussion on them here today.
What I have is lots of large 1D and 2D arrays of all reals, with nothing symbolic (it is a finite difference PDE solver), and so I thought that I should take advantage of using PackedArray.
I have an initialization function where I allocate all the data/grids needed. So I went and used ToPackedArray on them. It seems a bit faster, but I need to do more performance testing to better compare speed before and after and also compare RAM usage.
But while I was looking at this, I noticed that some operations in Mathematica automatically return lists as PackedArray already, and some do not.
For example, this does not return a packed array
a = Table[RandomReal[], {5}, {5}];
Developer`PackedArrayQ[a]
But this does
a = RandomReal[1, {5, 5}];
Developer`PackedArrayQ[a]
and this does
a = Table[0, {5}, {5}];
b = ListConvolve[ {{0, 1, 0}, {1, 4, 1}, {0, 1, 1}}, a, 1];
Developer`PackedArrayQ[b]
and matrix multiplication also returns its result as a packed array
a = Table[0, {5}, {5}];
b = a.a;
Developer`PackedArrayQ[b]
But element-wise multiplication does not
b = a*a;
Developer`PackedArrayQ[b]
My question: Is there a list somewhere which documents which Mathematica commands return a PackedArray and which do not? (Assuming the data meets the requirements, such as Real, not mixed, no symbolics, etc.)
Also, a minor question: do you think it will be better to check first if a list/matrix created is already packed before calling ToPackedArray on it? I would think calling ToPackedArray on an already packed list will not cost anything, as the call will return right away.
thanks,
update (1)
Just wanted to mention that I just found that PackedArray symbols are not allowed in a demo CDF, as I got an error uploading one that used them. So I had to take all my packing code out. Since I mainly write demos, this topic is now just of academic interest for me. But I wanted to thank everyone for their time and good answers.
There isn't a comprehensive list. To point out a few things:
Basic operations with packed arrays will tend to remain packed:
In[66]:= a = RandomReal[1, {5, 5}];
In[67]:= Developer`PackedArrayQ /@ {a, a.a, a*a}
Out[67]= {True, True, True}
Note above that my version (8.0.4) doesn't unpack for element-wise multiplication.
Whether a Table will result in a packed array depends on the number of elements:
In[71]:= Developer`PackedArrayQ[Table[RandomReal[], {24}, {10}]]
Out[71]= False
In[72]:= Developer`PackedArrayQ[Table[RandomReal[], {24}, {11}]]
Out[72]= True
In[73]:= Developer`PackedArrayQ[Table[RandomReal[], {25}, {10}]]
Out[73]= True
On["Packing"] will turn on messages to let you know when things unpack:
In[77]:= On["Packing"]
In[78]:= a = RandomReal[1, 10];
In[79]:= Developer`PackedArrayQ[a]
Out[79]= True
In[80]:= a[[1]] = 0 (* force unpacking due to type mismatch *)
Developer`FromPackedArray::punpack1: Unpacking array with dimensions {10}. >>
Out[80]= 0
Operations that do per-element inspection will usually unpack the array:
In[81]:= a = RandomReal[1, 10];
In[82]:= Position[a, Max[a]]
Developer`FromPackedArray::unpack: Unpacking array in call to Position. >>
Out[82]= {{4}}
The penalty for calling ToPackedArray on an already packed list is small enough that I wouldn't worry about it too much:
In[90]:= a = RandomReal[1, 10^7];
In[91]:= Timing[Do[Identity[a], {10^5}];]
Out[91]= {0.028089, Null}
In[92]:= Timing[Do[Developer`ToPackedArray[a], {10^5}];]
Out[92]= {0.043788, Null}
The frontend prefers packed to unpacked arrays, which can show up when dealing with Dynamic and Manipulate:
In[97]:= Developer`PackedArrayQ[{1}]
Out[97]= False
In[98]:= Dynamic[Developer`PackedArrayQ[{1}]]
Out[98]= True
When looking into performance, focus on cases where large lists are getting unpacked, rather than the small ones. Unless the small ones are in big loops.
This is just an addendum to Brett's answer:
SystemOptions["CompileOptions"]
will show you the length thresholds at which a function starts to return a packed array. So if you did need to pack a small list, as an alternative to using Developer`ToPackedArray, you could temporarily set a smaller number for one of the compile options, e.g.
SetSystemOptions["CompileOptions" -> {"TableCompileLength" -> 20}]
Note also some differences between functions which, to me at least, don't seem intuitive, so I generally have to test these kinds of things whenever I use them rather than instinctively knowing what will work best:
f = # + 1 &;
g[x_] := x + 1;
data = RandomReal[1, 10^6];
On["Packing"]
Timing[Developer`PackedArrayQ[f /@ data]]
{0.131565, True}
Timing[Developer`PackedArrayQ[g /@ data]]
Developer`FromPackedArray::punpack1: Unpacking array with dimensions {1000000}.
{1.95083, False}
Another addition to Brett's answer: if a list is already a packed array, then ToPackedArray is very fast, since this is checked quite early. Also, you might find this valuable:
http://library.wolfram.com/infocenter/Articles/3141/
In general, for numerics stuff, look for talks from Rob Knapp and/or Mark Sofroniou.
When I develop numerics codes, I write the function and then use On["Packing"] to make sure that everything is packed that needs to be packed.
Concerning Mike's answer, the threshold was introduced because for small arrays there is overhead. Where the threshold lies is hardware dependent. It might be an idea to write a function that sets these thresholds based on measurements done on the computer at hand.
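Such a helper could be as simple as the sketch below (hypothetical; the value n would come from your own measurements on the target machine):
setCompileLengths[n_Integer] :=
SetSystemOptions["CompileOptions" ->
{"TableCompileLength" -> n, "MapCompileLength" -> n}]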

Finding runs of similar, not identical, elements in Mathematica

I have a list of numbers. I want to extract from the list runs of numbers that fall inside some band and have some minimum length. For instance, suppose I want to operate on this list:
thisList = {-1.2, -1.8, 1.5, -0.6, -0.8, -0.1, 1.4, -0.3, -0.1, -0.7}
with band=1 and runLength=3. I would like to have
{{-0.6, -0.8, -0.1}, {-0.3, -0.1, -0.7}}
as the result. Right now I'm using
Cases[
Partition[thisList,runLength,1],
x_ /; Abs[Max[x] - Min[x]] < band
]
The main problem is that where runs overlap, I get many copies of the run. For instance, using
thisList = {-1.2, -1.8, 1.5, -0.6, -0.8, -0.1, -0.5, -0.3, -0.1, -0.7}
gives me
{{-0.6, -0.8, -0.1}, {-0.8, -0.1, -0.5}, {-0.1, -0.5, -0.3}, {-0.5, -0.3, -0.1}, {-0.3, -0.1, -0.7}}
where I would rather have
{-0.6, -0.8, -0.1, -0.5, -0.3, -0.1, -0.7}
without doing some hokey reduction of the overlapping result. What's the proper way? It'd be nice if it didn't involve exploding the data using Partition.
EDIT
Apparently, my first solution has at least two serious flaws: it is dead slow and completely impractical for lists larger than 100 elements, and it contains some bug(s) which I wasn't able to identify yet - it misses some bands sometimes. So, I will provide two (hopefully correct) and much more efficient alternatives, with the flawed one provided below for anyone interested.
A solution based on linked lists
Here is a solution based on linked lists. It allows us to still use patterns but avoid inefficiencies caused by patterns containing __ or ___ (when repeatedly applied):
ClearAll[toLinkedList];
toLinkedList[x_List] := Fold[{#2, #1} &, {}, Reverse@x]
ClearAll[accumF];
accumF[llFull_List, acc_List, {h_, t_List}, ctr_, max_, min_, band_, rLen_] :=
With[{cmax = Max[max, h], cmin = Min[min, h]},
accumF[llFull, {acc, h}, t, ctr + 1, cmax, cmin, band, rLen] /;
Abs[cmax - cmin] < band];
accumF[llFull_List, acc_List, ll : {h_, _List}, ctr_, _, _, band_, rLen_] /; ctr >= rLen :=
accumF[ll, (Sow[acc]; {}), ll, 0, h, h, band, rLen];
accumF[llFull : {h_, t : {_, _List}}, _List, ll : {head_, _List}, _, _, _, band_, rLen_] :=
accumF[t, {}, t, 0, First@t, First@t, band, rLen];
accumF[llFull_List, acc_List, {}, ctr_, _, _, _, rLen_] /; ctr >= rLen := Sow[acc];
ClearAll[getBandsLL];
getBandsLL[lst_List, runLength_Integer, band_?NumericQ] :=
Block[{$IterationLimit = Infinity},
With[{ll = toLinkedList@lst},
Map[Flatten,
If[# === {}, #, First@#] &@
Reap[
accumF[ll, {}, ll, 0, First@ll, First@ll, band, runLength]
][[2]]
]
]
];
Here are examples of use:
In[246]:= getBandsLL[{-1.2,-1.8,1.5,-0.6,-0.8,-0.1,1.4,-0.3,-0.1,-0.7},3,1]
Out[246]= {{-0.6,-0.8,-0.1},{-0.3,-0.1,-0.7}}
In[247]:= getBandsLL[{-1.2,-1.8,1.5,-0.6,-0.8,-0.1,-0.5,-0.3,-0.1,-0.7},3,1]
Out[247]= {{-0.6,-0.8,-0.1,-0.5,-0.3,-0.1,-0.7}}
The main idea of the function accumF is to traverse the number list (converted to a linked list beforehand) and accumulate a band in another linked list, which is passed to it as a second argument. Once the band condition fails, the accumulated band is memorized using Sow (if it was long enough), and the process starts over with the remaining part of the linked list. The ctr parameter may not be needed if we chose to use Depth[acc] instead.
There are a few non-obvious things present in the above code. One subtle point is that an attempt to join the two middle rules for accumF into a single rule (they look very similar) and use CompoundExpression (something like If[ctr >= rLen, Sow[acc]; accumF[...]]) on the r.h.s. would lead to a non-tail-recursive accumF (see this answer for a more detailed discussion of this issue; this is also why I put the (Sow[acc]; {}) piece inside a function call - to avoid a top-level CompoundExpression on the r.h.s.). Another subtle point is that I have to maintain a copy of the linked list containing the elements remaining right after the last successful match was found, since in the case of an unsuccessful sequence I need to roll back to that list minus its first element and start over. This linked list is stored in the first argument of accumF.
Note that passing large linked lists around does not cost much, since what is copied is only the first element (head) and a pointer to the rest (tail). This is the main reason why using linked lists vastly improves performance compared to patterns like {___, x__, right___} - in the latter case, the full sequences x and right are copied. With linked lists, we effectively copy only a few references, and therefore our algorithm behaves roughly as we expect (linearly in the length of the data list here). In this answer, I also mentioned the use of linked lists in such cases as one of the techniques to optimize code (section 3.4).
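To make the linked-list representation concrete (using the toLinkedList defined above):
In[1]:= toLinkedList[{1, 2, 3}]
Out[1]= {1, {2, {3, {}}}}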
A Compile-based solution
Here is a straightforward but not too elegant function based on Compile, which finds a list of starting and ending bands positions in the list:
bandPositions =
Compile[{{lst, _Real, 1}, {runLength, _Integer}, {band, _Real}},
Module[{i = 1, j, currentMin, currentMax,
startEndPos = Table[{0, 0}, {Length[lst]}], ctr = 0},
For[i = 1, i <= Length[lst], i++,
currentMin = currentMax = lst[[i]];
For[j = i + 1, j <= Length[lst], j++,
If[lst[[j]] < currentMin,
currentMin = lst[[j]],
(* else *)
If[lst[[j]] > currentMax,
currentMax = lst[[j]]
]
];
If[Abs[currentMax - currentMin] >= band ,
If[ j - i >= runLength,
startEndPos[[++ctr]] = {i, j - 1}; i = j - 1
];
Break[],
(* else *)
If[j == Length[lst] && j - i >= runLength - 1,
startEndPos[[++ctr]] = {i, j}; i = Length[lst];
Break[];
];
]
]; (* inner For *)
]; (* outer For *)
Take[startEndPos, ctr]], CompilationTarget -> "C"];
This is used in the final function:
getBandsC[lst_List, runLength_Integer, band_?NumericQ] :=
Map[Take[lst, #] &, bandPositions[lst, runLength, band]]
Examples of use:
In[305]:= getBandsC[{-1.2,-1.8,1.5,-0.6,-0.8,-0.1,1.4,-0.3,-0.1,-0.7},3,1]
Out[305]= {{-0.6,-0.8,-0.1},{-0.3,-0.1,-0.7}}
In[306]:= getBandsC[{-1.2,-1.8,1.5,-0.6,-0.8,-0.1,-0.5,-0.3,-0.1,-0.7},3,1]
Out[306]= {{-0.6,-0.8,-0.1,-0.5,-0.3,-0.1,-0.7}}
Benchmarks
In[381]:=
largeTest = RandomReal[{-5,5},50000];
(res1 =getBandsLL[largeTest,3,1]);//Timing
(res2 =getBandsC[largeTest,3,1]);//Timing
res1==res2
Out[382]= {1.109,Null}
Out[383]= {0.016,Null}
Out[384]= True
Obviously, if one wants performance, Compile wins hands down. My observations for larger lists are that both solutions have approximately linear complexity in the size of the number list (as they should), with the compiled one roughly 150 times faster on my machine than the one based on linked lists.
Remarks
In fact, both methods encode the same algorithm, although this may not be obvious. The one with recursion and patterns is arguably somewhat more understandable, but that is a matter of opinion.
A simple, but slow and buggy version
Here is the original code that I wrote first to solve this problem. It is based on a rather straightforward use of patterns and repeated rule application. As mentioned, one disadvantage of this method is its very bad performance. This is actually another case against using constructs like {___, x__, y___} in conjunction with repeated rule application for anything longer than a few dozen elements. In the mentioned recommendations for code optimization techniques, this corresponds to section 4.1.
Anyways, here is the code:
If[# === {}, #, First@#] &@
Reap[thisList //. {
left___,
Longest[x__] /; Length[{x}] >= runLength && Abs[Max[{x}] - Min[{x}]] < band,
right___} :> (Sow[{x}]; {right})][[2]]
It works correctly for both of the original small test lists. It also looks generally correct, but for larger lists it often misses some bands, which can be seen by comparison with the other two methods. I haven't yet been able to localize the bug, since the code seems pretty transparent.
I'd try this instead:
thisList /. {___, Longest[a : Repeated[_, {3, Infinity}]], ___} :>
{a} /; Abs[Max@{a} - Min@{a}] < 1
where Repeated[_, {3, Infinity}] guarantees that you get at least 3 terms, and Longest ensures it gives you the longest run possible. As a function,
Clear[f]
f[list_List, band_, minlen_Integer?Positive] := f[list, band, minlen, Infinity]
f[list_List, band_,
minlen_Integer?Positive, maxlen_?Positive] /; maxlen >= minlen :=
list /. {___, Longest[a : Repeated[_, {minlen, maxlen}]], ___} :> {a} /;
Abs[Max@{a} - Min@{a}] < band
Very complex answers given. :-) I think I have a simpler approach for you. Define for yourself what similarity means to you, and use GatherBy[] to collect all similar elements, or SplitBy[] to collect "runs" of similar elements (then remove "runs" of unacceptably short length, say 1 or 2, via DeleteCases[]); a sketch follows below.
The harder question is establishing similarity. By your method, 1.2, 0.9, 1.9, 0.8 would group the first three elements but not the last three, yet 0.9 and 0.8 are just as similar, and 1.9 would kick it out of your band. What about .4, .5, .6, .7, .8, .9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5 - where does similarity end?
Also look into ClusteringComponents[] and FindClusters[].
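Here is a sketch of that route using Split with a pairwise test (each element within band of its neighbor). Note this criterion is not identical to the max-min band of the question, but it does reproduce both expected outputs on the test lists above:
band = 1; runLength = 3;
DeleteCases[
Split[thisList, Abs[#1 - #2] < band &],
_?(Length[#] < runLength &)]
(* {{-0.6, -0.8, -0.1, -0.5, -0.3, -0.1, -0.7}} for the second test list *)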

Coefficient function is slow

Please consider:
Clear[x]
expr = Sum[x^i, {i, 15}]^30;
CoefficientList[expr, x]; // Timing
Coefficient[Expand#expr, x, 234]; // Timing
Coefficient[expr, x, 234]; // Timing
{0.047, Null}
{0.047, Null}
{4.93, Null}
Help states:
Coefficient works whether or not expr is explicitly given in expanded form.
Is there a reason why Coefficient needs to be so slow in the last case?
Here is a hack which may enable your code to be fast, but I don't guarantee that it will always work correctly:
ClearAll[withFastCoefficient];
SetAttributes[withFastCoefficient, HoldFirst];
withFastCoefficient[code_] :=
Block[{Binomial},
Binomial[x_, y_] := 10 /; ! FreeQ[Stack[_][[-6]], Coefficient];
code]
Using it, we get:
In[58]:= withFastCoefficient[Coefficient[expr,x,234]]//Timing
Out[58]= {0.172,3116518719381876183528738595379210}
The idea is that Coefficient uses Binomial internally to estimate the number of terms, and then expands (calls Expand) if the number of terms is less than 1000, which you can check by using Trace[..., TraceInternal -> True]. When it does not expand, it computes lots of sums of large coefficient lists dominated by zeros, and this is apparently slower than expanding, for a range of expressions. What I do is fool Binomial into returning a small number (10), but I also tried to make it such that it will only affect the Binomial called internally by Coefficient:
In[67]:= withFastCoefficient[f[Binomial[7,4]]Coefficient[expr,x,234]]
Out[67]= 3116518719381876183528738595379210 f[35]
I cannot, however, guarantee that there are no examples where Binomial somewhere else in the code will be computed incorrectly.
EDIT
Of course, a safer alternative that always exists is to redefine Coefficient using the Villegas-Gayley trick, expanding the expression inside it and calling it again:
Unprotect[Coefficient];
Module[{inCoefficient},
Coefficient[expr_, args__] :=
Block[{inCoefficient = True},
Coefficient[Expand[expr], args]] /; ! TrueQ[inCoefficient]
];
Protect[Coefficient];
EDIT 2
My first suggestion had the advantage that we defined a macro which modified the properties of functions locally, but the disadvantage that it was unsafe. My second suggestion is safer but modifies Coefficient globally, so it will always expand until we remove that definition. We can have the best of both worlds with the help of Internal`InheritedBlock, which creates a local copy of a given function. Here is the code:
ClearAll[withExpandingCoefficient];
SetAttributes[withExpandingCoefficient, HoldFirst];
withExpandingCoefficient[code_] :=
Module[{inCoefficient},
Internal`InheritedBlock[{Coefficient},
Unprotect[Coefficient];
Coefficient[expr_, args__] :=
Block[{inCoefficient = True},
Coefficient[Expand[expr], args]] /; ! TrueQ[inCoefficient];
Protect[Coefficient];
code
]
];
The usage is similar to the first case:
In[92]:= withExpandingCoefficient[Coefficient[expr,x,234]]//Timing
Out[92]= {0.156,3116518719381876183528738595379210}
The main Coefficient function remains unaffected however:
In[93]:= DownValues[Coefficient]
Out[93]= {}
Coefficient will not expand unless it deems it absolutely necessary to do so. This does indeed avoid memory explosions. I believe it has been this way since version 3 (I think I was working on it around 1995 or so).
It can also be faster to avoid expansion. Here is a simple example.
In[28]:= expr = Sum[x^i + y^j + z^k, {i, 15}, {j, 10}, {k, 20}]^20;
In[29]:= Coefficient[expr, x, 234]; // Timing
Out[29]= {0.81, Null}
But this next one appears to hang in version 8, and takes at least half a minute in the development version of Mathematica (where Expand was changed).
Coefficient[Expand[expr], x, 234]; // Timing
Possibly some heuristics should be added to look for univariates that will not explode. It does not seem like a high-priority item, though.
Daniel Lichtblau
expr = Sum[x^i, {i, 15}]^30;
scoeff[ex_, var_, n_] /; PolynomialQ[ex, var] :=
ex + O[var]^(n + 1) /.
Verbatim[SeriesData][_, 0, c_List, nmin_, nmax_, 1] :>
If[nmax - nmin != Length[c], 0, c[[-1]]];
Timing[scoeff[expr, x, 234]]
seems quite fast, too.
After some experimentation following Rolf Mertig's answer, this appears to be the fastest method for expressions of the type Sum[x^i, {i, 15}]^30:
SeriesCoefficient[expr, {x, 0, 234}]
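For instance (the coefficient agrees with the value computed earlier; absolute timings will of course vary by machine and version):
SeriesCoefficient[expr, {x, 0, 234}] // Timing
(* {..., 3116518719381876183528738595379210} *)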

How to implement an half-edge data structure in Haskell?

For a description of the data structure see
http://www.flipcode.com/archives/The_Half-Edge_Data_Structure.shtml
http://www.cgal.org/Manual/latest/doc_html/cgal_manual/HalfedgeDS/Chapter_main.html
A half-edge data structure involves cycles.
Is it possible to implement it in a functional language like Haskell?
Are mutable references (STRef) the way to go?
Thanks
In order to efficiently construct half-edge data structures, you need an acceleration structure for the HE_vert (let's call it HE_vert_acc... but you can actually just do this in HE_vert directly) that saves all HE_edges that point to this HE_vert. Otherwise you get very bad complexity when trying to define the "HE_edge* pair" (which is the oppositely oriented adjacent half-edge), e.g. via brute-force comparison.
So, making a half-edge data structure for a single face can easily be done with the tying-the-knot method, because there are (probably) no pairs anyway. But if you add the complexity of the acceleration structure to decide on those pairs efficiently, then it becomes a bit more difficult, since you need to update the same HE_vert_acc across different faces, and then update the HE_edges to contain a valid pair. Those are actually multiple steps. How you would glue them all together via tying-the-knot is way more complex than constructing a circular doubly linked list and not really obvious.
Because of that... I wouldn't really bother much with the question "how do I construct this data structure in idiomatic Haskell".
I think it's reasonable to use more imperative approaches here while trying to keep the API functional. I'd probably go for arrays and state-monads.
Not saying it isn't possible with tying-the-knot, but I haven't seen such an implementation yet. It is not an easy problem in my opinion.
EDIT: so I couldn't let go and implemented this, assuming the input is an .obj mesh file.
My approach is based on the method described at https://wiki.haskell.org/Tying_the_Knot#Migrated_from_the_old_wiki - specifically the one from Andrew Bromage, where he explains tying the knot for a DFA without knowing the knots at compile time.
Unfortunately, the half-edge data structure is even more complex, since it actually consists of 3 data structures.
So I started with what I actually want:
data HeVert a = HeVert {
vcoord :: a -- the coordinates of the vertex
, emedge :: HeEdge a -- one of the half-edges emanating from the vertex
}
data HeFace a = HeFace {
bordedge :: HeEdge a -- one of the half-edges bordering the face
}
data HeEdge a = HeEdge {
startvert :: HeVert a -- start-vertex of the half-edge
, oppedge :: Maybe (HeEdge a) -- oppositely oriented adjacent half-edge
, edgeface :: HeFace a -- face the half-edge borders
, nextedge :: HeEdge a -- next half-edge around the face
}
The problem is that we run into multiple issues here when constructing it efficiently, so for all these data structures we will use "indirect" ones which basically just save the plain information given by the .obj mesh file.
So I came up with this:
data IndirectHeEdge = IndirectHeEdge {
edgeindex :: Int -- edge index
, svindex :: Int -- index of start-vertice
, nvindex :: Int -- index of next-vertice
, indexf :: Int -- index of face
, offsetedge :: Int -- offset to get the next edge
}
data IndirectHeVert = IndirectHeVert {
emedgeindex :: Int -- emanating edge index (starts at 1)
, edgelist :: [Int] -- index of edge that points to this vertice
}
data IndirectHeFace =
IndirectHeFace (Int, [Int]) -- (faceIndex, [verticeindex])
A few things are probably not intuitive and can be done better, e.g. the "offsetedge" thing.
See how I didn't save the actual vertices anywhere. This is just a lot of index stuff which sort of emulates the C pointers.
We will need "edgelist" to efficiently find the oppositely oriented ajdgacent half-edges later.
I don't go into detail how I fill these indirect data structures, because that is really specific to the .obj file format. I'll just give an example on how things convert.
Suppose we have the following mesh file:
v 50.0 50.0
v 250.0 50.0
v 50.0 250.0
v 250.0 250.0
v 50.0 500.0
v 250.0 500.0
f 1 2 4 3
f 3 4 6 5
The indirect faces will now look like this:
[IndirectHeFace (0,[1,2,4,3]),IndirectHeFace (1,[3,4,6,5])]
The indirect edges:
[IndirectHeEdge {edgeindex = 0, svindex = 1, nvindex = 2, indexf = 0, offsetedge = 1},
IndirectHeEdge {1, 2, 4, 0, 1},
IndirectHeEdge {2, 4, 3, 0, 1},
IndirectHeEdge {3, 3, 1, 0, -3},
IndirectHeEdge {0, 3, 4, 1, 1},
IndirectHeEdge {1, 4, 6, 1, 1},
IndirectHeEdge {2, 6, 5, 1, 1},
IndirectHeEdge {3, 5, 3, 1, -3}]
And the indirect vertices:
[(1,IndirectHeVert {emedgeindex = 0, edgelist = [3]}),
(2,IndirectHeVert {1, [0]}),
(3,IndirectHeVert {4, [7,2]}),
(4,IndirectHeVert {5, [4,1]}),
(5,IndirectHeVert {7, [6]}),
(6,IndirectHeVert {6, [5]})]
Now the really interesting part is how we can turn these indirect data structures into the "direct" one we defined at the very beginning. This is a bit tricky, but is basically just index lookups and works because of laziness.
Here's the pseudo code (the actual implementation uses not just lists and has additional overhead in order to make the function safe):
indirectToDirect :: [a] -- parsed vertices, e.g. 2d points (Double, Double)
-> [IndirectHeEdge]
-> [IndirectHeFace]
-> [IndirectHeVert]
-> HeEdge a
indirectToDirect points edges faces vertices
= thisEdge (head edges)
where
thisEdge edge
= HeEdge (thisVert (vertices !! svindex edge) $ svindex edge)
(thisOppEdge (svindex edge) $ indexf edge)
(thisFace $ faces !! indexf edge)
(thisEdge $ edges !! (edgeindex edge + offsetedge edge))
thisFace face = HeFace $ thisEdge (edges !! (head . snd $ face))
thisVert vertice coordindex
= HeVert (points !! (coordindex - 1))
(thisEdge $ edges !! (emedgeindex vertice - 1)) -- index into edges, not points
thisOppEdge startverticeindex faceindex
= thisEdge
<$>
(headMay
. filter ((/=) faceindex . indexf)
. fmap (edges !!)
. edgelist -- getter
$ vertices !! startverticeindex)
Mind that we cannot really make this return a "Maybe (HeEdge a)", because it would have to evaluate the whole thing (which is infinite) in order to know which constructor to use.
I had to add a NoVert/NoEdge/NoFace constructor for each of them to avoid the "Maybe".
Another downside is that this heavily depends on the input and isn't really a generic library thing. I'm also not entirely sure if it will re-evaluate (which is still very cheap) already visited edges.
Using Data.IntMap.Lazy seems to increase performance (at least for the list of IndirectHeVert). Data.Vector didn't really do much for me here.
There's no need for using the state monad anywhere, unless you want to use Arrays or Vectors.
Obviously the problem is that a half-edge references the next and the opposite half-edge (the other references are no problem). You can "break the cycle", e.g. by referencing not other half-edges directly, but just an ID (e.g. a simple Int). In order to look up a half-edge by ID, you can store them in a Data.Map. Of course this approach requires some bookkeeping in order to avoid a big hairy mess, but it is the easiest way I can think of.
Stupid me, I'm not thinking lazy enough. The solution above works for strict functional languages, but is unnecessary for Haskell.
If the task in question allows you to build the half-edge structure once and then query it many times, then lazy tying-the-know approach is the way to go, as was pointed out in the comments and the other answer.
However, if you want to update your structure, then purely-functional interface might prove cumbersome to work with. Also, you need to consider O(..) requirements for update functions. It might turn out that you need mutable internal representation (probably with pure API on top) after all.
I've run into a helpful application of polymorphism for this sort of thing. You'll commonly want both a static, non-infinite version for serialization, as well as a knot-tied version for internal representation.
If you make one version that's polymorphic, then you can update that particular value using record syntax:
data Foo edge_type_t = Depot {
edge_type :: edge_type_t,
idxI, idxE, idxF, idxL :: !Int
} deriving (Show, Read)
loadFoo edgetypes d = d { edge_type = edgetypes ! edge_type d }
unloadFoo d = d { edge_type = edgetype_id $ edge_type d }
There is, however, one major caveat: you cannot make a Foo (Foo (Foo (...))) type this way, because Haskell must understand the types recursively. :(
