How to select sublists faster in Mathematica? - wolfram-mathematica

My question sounds more general, but I have a specific example. I have a list of data in form:
plotDataAll={{DateList1, integerValue1}, {DateList2, integerValue2}...}
The dates are sorted chronologically, that is plotDataAll[[2,1]] is a more recent time than plotDataAll[[1,1]].
I want to create plots of specific periods, 24h ago, 1 week ago, etc. For that I need just a portion of data. Here's how I got what I wanted:
mostRecentDate=Max[Map[AbsoluteTime, plotDataAll[[All,1]]]];
plotDataLast24h=Select[plotDataAll,AbsoluteTime[#[[1]]]>(mostRecentDate-86400.)&];
plotDataLastWeek=Select[plotDataAll,AbsoluteTime[#[[1]]]>(mostRecentDate-604800.)&];
plotDataLastMonth=Select[plotDataAll,AbsoluteTime[#[[1]]]>(mostRecentDate-2.592*^6)&];
plotDataLast6M=Select[plotDataAll,AbsoluteTime[#[[1]]]>(mostRecentDate-1.5552*^7)&];
Then I used DateListPlot to plot the data. This becomes slow if you need to do this for many sets of data.
What comes to my mind: if I could find the index of the first element in the list that satisfies the date condition then, because the list is chronologically sorted, the rest of the elements would satisfy the condition as well. So I would have:
plotDataLast24h=plotDataAll[[beginningIndexThatSatisfiesLast24h;;Length[plotDataAll]]]
But how do I get the index of the first element that satisfies the condition?
If you have a faster way to do this, please share your answer. Also, if you have a simple, faster, but sub-optimal solution, that's fine too.
EDIT:
Time data is not in regular intervals.

If your data is at regular intervals you should be able to know how many elements constitute a day, week, etc. and use Part.
plotDataAll2[[knownIndex;;-1]]
or more specifically if the data was hourly:
plotDataAll2[[-25;;-1]]
would give you the last 24 hours. If the spacing is irregular then use Select or Pick. Date and time functions in Mma are horrendously slow unfortunately. If you are going to do a lot of date and time calculation better to do a conversion to AbsoluteTime just once and then work with that. You will also notice that your DateListPlots render much faster if you use AbsoluteTime.
plotDataAll2=plotDataAll;
plotDataAll2[[All,1]]=AbsoluteTime/@plotDataAll2[[All,1]];
mostRecentDate=plotDataAll2[[-1,1]]
On my computer Pick is about 3 times faster but there may be other improvements you can make to the code below:
selectInterval[data_, interval_] := (tmp = data[[-1, 1]] - interval;
Select[data, #[[1]] > tmp &])
pickInterval[data_, interval_] := (tmp = data[[-1, 1]] - interval;
Pick[data, Sign[data[[All, 1]] - tmp], 1])
So to find data within the last week:
Timing[selectInterval[plotDataAll2, 604800]]
Timing[pickInterval[plotDataAll2, 604800]]

The thing that you want to avoid is checking all the values in the data table. Since the data is sequential you can just start checking from the back and stop when you have found the correct index.
Schematically:
tab = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
i = j = Length@tab;
While[tab[[i]] > 5, --i];
tab[[i ;; j]]
-> {5, 6, 7, 8, 9}
Substitute > 5 for whatever you want to check for. I didn't have time to test this right now, but in your case, e.g.,
maxDate=AbsoluteTime@plotDataAll[[-1,1]]; (* no need to find Max if data is sequential *)
i24h = iWeek = iMonth = i6Month = iMax = Length@plotDataAll;
While[AbsoluteTime@plotDataAll[[i24h,1]] > maxDate-86400.,--i24h];
While[AbsoluteTime@plotDataAll[[iWeek,1]] > maxDate-604800.,--iWeek];
While[AbsoluteTime@plotDataAll[[iMonth,1]] > maxDate-2.592*^6,--iMonth];
While[AbsoluteTime@plotDataAll[[i6Month,1]] > maxDate-1.5552*^7,--i6Month];
Then, e.g.,
DateListPlot@plotDataAll[[i24h;;iMax]]
If you want to start somewhere in the middle of plotDataAll just use a While to first find the starting point and set iMax and maxDate appropriately.
For large data sets this may be one of the few instances where a loop construct is better than MMA's inbuilt functions. That, however, may be my own ignorance; if anyone here knows of an MMA inbuilt function that does this sort of "stop when match found" comparison better than While, please share it.
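Since the list is sorted, the boundary index can also be found by bisection rather than by walking back one element at a time. Here is a rough sketch (firstIndexAfter is an assumed helper name, not part of the answer above); it needs only O(log n) date conversions and assumes at least one element lies after the cutoff:
firstIndexAfter[data_, cutoff_] :=
 Module[{lo = 1, hi = Length[data], mid},
  While[lo < hi,
   mid = Quotient[lo + hi, 2];
   If[AbsoluteTime@data[[mid, 1]] > cutoff, hi = mid, lo = mid + 1]];
  lo]
(* e.g. plotDataLast24h = plotDataAll[[firstIndexAfter[plotDataAll, maxDate - 86400.] ;; -1]] *)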
EDIT: Timing comparisons
I played around a bit with Mike's and my solution and compared it to the OP's method. Here is the toy code I used for each solution
tab = Range@1000000;
(* My solution *)
i = j = tab[[-1]];
While[tab[[i]] > j - 24, --i];
tab[[i ;; j]]
(* Mike's solution *)
tmp = tab[[-1]] - 24;
Pick[tab, Sign[tab[[All]] - tmp], 1]
(* Enedene's solution *)
j = tab[[-1]];
Select[tab, # > (j - 24) &]
Here are the results (OS X, MMA 8.0.4, Core2Duo 2.0GHz)
As you can see, Mike's solution has a definite advantage over enedene's solution but, as I surmised originally, the downside of using inbuilt functions like Pick is that they still perform a comparative check on all the elements in the list, which is highly superfluous in this instance. My solution takes time proportional only to the size of the selected window, since no unnecessary checks are made.

Related

Is there way to remove element from list in Mathematica

Is there a function in Wolfram Mathematica for removing an element from the original list?
For example
a={1,2,3};
DeleteFrom[a,1];
a
a={2,3}
If it is absent, can anyone give an example of an efficient variant of such a function?
(I know that there is the function Delete[], but it creates a new list. This is not good if the list is big.)
If you want to drop the first element from a list a the statement
Drop[a,1]
returns a list the same as a without its first element. Note that this does not update a. To do that you could assign the result to a, e.g.
a = Drop[a,1]
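For instance, reproducing the example from the question:
a = {1, 2, 3};
a = Drop[a, 1];  (* or, equivalently, a = Rest[a] *)
a
(* {2, 3} *)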
Note that this is probably exactly what Delete is doing behind the scenes; first making a copy of a without its first element, then assigning the name a to that new list, then freeing the memory used by the old list.
Comparing destructive updates and non-destructive updates in Mathematica is quite complicated and can take one deep into the system's internals. You'll find a lot about the subject on the Stack Exchange Mathematica site.
Every time you change the length of a list in Mathematica you cause a reallocation of the list, which takes O(n) rather than O(1) time. Though no "DeleteFrom" function exists, if it did it would be no faster than a = Delete[a, x].
If you can create in advance a list of all the elements you wish to delete and delete them all at once, you will get much better performance. If you cannot, you will have to find another way to formulate your problem. I suggest you join us on the proper Stack Exchange site if you have additional questions.
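For example, here is a small sketch of deleting several positions in a single call; Delete accepts a list of position specifications, so the list is reallocated only once (the positions are made up for illustration):
a = Range[10];
positionsToDrop = {2, 5, 9};
a = Delete[a, List /@ positionsToDrop]
(* {1, 3, 4, 6, 7, 8, 10} *)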
Assign the element to an empty sequence and it will be removed from the list. This works for any element.
In[1] := a = {1,2,3}
Out[1]= {1,2,3}
In[2] := a[[1]] = Sequence[]
Out[2] = Sequence[]
In[3] := a
Out[3] = {2,3}
Yes, Mathematica tends towards non-destructive programming, but the programmers at Wolfram are pretty clever folks and the code seems to run pretty fast. It's hard to believe they would always copy a whole list to change one element, i.e. not make any optimizations.
Improving on user3446498's answer, you can do the following:
In[1] := a = {1,2,3};
In[2] := a[[1]] = Nothing;
In[3] := a
Out[3] = {2,3}
In[4] := a == {2,3}
Out[4] = True
This Nothing symbol was introduced in version 10 (2015); see here.
Neither the solution from @user3446498 nor the one from @pmsoltani will actually delete the element. Test:
a = {1, 2, 3};
a[[2]] = Sequence[]; (* or Nothing *)
--a[[1]];
++a[[2]];
a
They both output {0, 4, 3}, while {0, 4} is expected.
Replacing the 2nd line with a = Delete[a, 2]; would work.
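For instance, the corrected test reads:
a = {1, 2, 3};
a = Delete[a, 2];
--a[[1]];
++a[[2]];
a
(* {0, 4}, as expected *)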

Set::write: Error When Creating Function

I'm very new to Mathematica, and I'm getting pretty frustrated with the errors I'm generating when it comes to creating a function. Below, I have a function I'm writing for 'centering' a matrix where rows correspond to examples, columns to features. The aim is to subtract from each element the mean of the column to which it belongs.
centerdata[datamat_] := (
numdatapoints =
Dimensions[datamat][[1]](*Get number of datapoints*)
numberfeatures =
Dimensions[datamat[[1]]][[1]](*Get number of datapoints*)
columnmean = ((Total[datamat])/numdatapoints)
For[i = 1, i < numdatapoints + 1, i++, (* For each row*)
For[j = 1, j < numfeatures + 1, j++, (* For each element*)
datum = datamat[[i]][[j]];
newval = (datum - (colmean[[j]]));
ReplacePart[datamat, {i, j} -> newval];
];
];
Return[datamat];
)
Running this function for a matrix, I get the following error:
"Set::write: Tag Times in 4 {5.84333,3.054,3.75867,1.19867} is Protected. >>
Set::write: "Tag Times in 4\ 150 is Protected."
Where {5.84333,3.054,3.75867,1.19867} is the first example in the data matrix and 150 is the number of examples in the matrix (I'm using the famous iris dataset, for anyone interested). These errors correspond to this code:
numdatapoints = Dimensions[datamat][[1]](*Get number of datapoints*)
numberfeatures = Dimensions[datamat[[1]]][[1]](*Get number of datapoints*)
Googling and toying with this error hasn't helped much as the replies in general relate to multiplication, which clearly isn't being done here.
Given a table (tab) of data the function Mean[tab] will return a list of the means of each column. Next, you want to subtract this (element-wise) from each row in the table, try this:
Map[Plus[-Mean[tab],#]&,tab]
I have a feeling that there is probably either an intrinsic statistical function to do this in one statement or that I am blind to a much simpler solution.
Since you are a beginner I suggest that you immediately read the documentation for:
Map, which is one of the fundamental operators in functional programming languages such as the one Mathematica pretends to be; and
pure functions whose use involves the cryptic symbols # and &.
If you are writing loops in Mathematica programs you are almost certainly mis-using the system.
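To illustrate the point about loops, here is a rough loop-free sketch of the same centering operation (not the asker's code; it relies on Mean of a matrix giving the list of column means and on element-wise subtraction of a list from each row):
centerdata[datamat_] := Map[# - Mean[datamat] &, datamat]
(* e.g. centerdata[{{1., 2.}, {3., 4.}}] gives {{-1., -1.}, {1., 1.}} *)
The built-in Standardize also accepts custom location and scale functions and may be able to do the same thing in a single call.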

Algorithm for picking pattern free downvalues from a sparse definition list

I have the following problem.
I am developing a stochastic simulator which samples configurations of the system at random and stores the statistics of how many times each configuration has been visited at certain time instances. Roughly the code works like this
f[_Integer][{_Integer..}] :=0
...
someplace later in the code, e.g.,
index = get index;
c = get random configuration (i.e. a tuple of integers, say a pair {n1, n2});
f[index][c] = f[index][c] + 1;
which tags that configuration c has occurred once more in the simulation at time instance index.
Once the code has finished there is a list of definitions for f that looks something like this (I typed it by hand just to emphasize the most important parts)
?f
f[1][{1, 2}] = 112
f[1][{3, 4}] = 114
f[2][{1, 6}] = 216
f[2][{2, 7}] = 227
...
f[index][someconfiguration] = some value
...
f[_Integer][{_Integer..}] :=0
Please note that pattern free definitions that come first can be rather sparse. Also one cannot know which values and configurations will be picked.
The problem is to efficiently extract down values for a desired index, for example issue something like
result = ExtractConfigurationsAndOccurences[f, 2]
which should give a list with the structure
result = {list1, list2}
where
list1 = {{1, 6}, {2, 7}} (* the list of configurations that occurred during the simulation*)
list2 = {216, 227} (* how many times each of them occurred *)
The problem is that ExtractConfigurationsAndOccurences should be very fast. The only solution I could come up with was to use SubValues[f] (which gives the full list) and filter it with a Cases statement. I realize that this procedure should be avoided at any cost since there will be exponentially many configurations (definitions) to test, which slows down the code considerably.
Is there a natural way in Mathematica to do this in a fast way?
I was hoping that Mathematica would see f[2] as a single head with many down values but using DownValues[f[2]] gives nothing. Also using SubValues[f[2]] results in an error.
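For reference, the Cases-based filtering mentioned above might look roughly like this (a sketch; the final catch-all definition is dropped with Most, and every stored definition is still scanned, which is exactly the cost one wants to avoid):
ExtractConfigurationsAndOccurences[f_Symbol, index_Integer] :=
 Transpose[
  Cases[Most@SubValues[f, Sort -> False],
   (lhs_ :> count_) /; lhs[[1, 0, 1]] === index :> {lhs[[1, 1]], count}]]
(* e.g. ExtractConfigurationsAndOccurences[f, 2] -> {{{1, 6}, {2, 7}}, {216, 227}};
   assumes at least one configuration was recorded for the given index *)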
This is a complete rewrite of my previous answer. It turns out that in my previous attempts I overlooked a much simpler method based on a combination of packed arrays and sparse arrays, which is much faster and more memory-efficient than all previous methods (at least in the range of sample sizes where I tested it), while only minimally changing the original SubValues-based approach. Since the question asked about the most efficient method, I will remove the other ones from the answer (they are quite a bit more complex and take a lot of space; those who would like to see them can inspect past revisions of this answer).
The original SubValues-based approach
We start by introducing a function to generate the test samples of configurations for us. Here it is:
Clear[generateConfigurations];
generateConfigurations[maxIndex_Integer, maxConfX_Integer, maxConfY_Integer,
nconfs_Integer] :=
Transpose[{
RandomInteger[{1, maxIndex}, nconfs],
Transpose[{
RandomInteger[{1, maxConfX}, nconfs],
RandomInteger[{1, maxConfY}, nconfs]
}]}];
We can generate a small sample to illustrate:
In[3]:= sample = generateConfigurations[2,2,2,10]
Out[3]= {{2,{2,1}},{2,{1,1}},{1,{2,1}},{1,{1,2}},{1,{1,2}},
{1,{2,1}},{2,{1,2}},{2,{2,2}},{1,{2,2}},{1,{2,1}}}
We have here only 2 indices, and configurations where both "x" and "y" numbers vary from 1 to 2 only - 10 such configurations.
The following function will help us imitate the accumulation of frequencies for configurations, as we increment SubValues-based counters for repeatedly occurring ones:
Clear[testAccumulate];
testAccumulate[ff_Symbol, data_] :=
Module[{},
ClearAll[ff];
ff[_][_] = 0;
Do[
doSomeStuff;
ff[#1][#2]++ & @@ elem;
doSomeMoreStaff;
, {elem, data}]];
The doSomeStuff and doSomeMoreStaff symbols are here to represent some code that might precede or follow the counting code. The data parameter is supposed to be a list of the form produced by generateConfigurations. For example:
In[6]:=
testAccumulate[ff,sample];
SubValues[ff]
Out[7]= {HoldPattern[ff[1][{1,2}]]:>2,HoldPattern[ff[1][{2,1}]]:>3,
HoldPattern[ff[1][{2,2}]]:>1,HoldPattern[ff[2][{1,1}]]:>1,
HoldPattern[ff[2][{1,2}]]:>1,HoldPattern[ff[2][{2,1}]]:>1,
HoldPattern[ff[2][{2,2}]]:>1,HoldPattern[ff[_][_]]:>0}
The following function will extract the resulting data (indices, configurations and their frequencies) from the list of SubValues:
Clear[getResultingData];
getResultingData[f_Symbol] :=
Transpose[{#[[All, 1, 1, 0, 1]], #[[All, 1, 1, 1]], #[[All, 2]]}] &@
Most@SubValues[f, Sort -> False];
For example:
In[10]:= result = getResultingData[ff]
Out[10]= {{2,{2,1},1},{2,{1,1},1},{1,{2,1},3},{1,{1,2},2},{2,{1,2},1},
{2,{2,2},1},{1,{2,2},1}}
To finish with the data-processing cycle, here is a straightforward function to extract data for a fixed index, based on Select:
Clear[getResultsForFixedIndex];
getResultsForFixedIndex[data_, index_] :=
If[# === {}, {}, Transpose[#]] &[
Select[data, First@# == index &][[All, {2, 3}]]];
For our test example,
In[13]:= getResultsForFixedIndex[result,1]
Out[13]= {{{2,1},{1,2},{2,2}},{3,2,1}}
This is presumably close to what @zorank tried, in code.
A faster solution based on packed arrays and sparse arrays
As @zorank noted, this becomes slow for larger samples with more indices and configurations. We will now generate a large sample to illustrate that (note! This requires about 4-5 Gb of RAM, so you may want to reduce the number of configurations if this exceeds the available RAM):
In[14]:=
largeSample = generateConfigurations[20,500,500,5000000];
testAccumulate[ff,largeSample];//Timing
Out[15]= {31.89,Null}
We will now extract the full data from the SubValues of ff:
In[16]:= (largeres = getResultingData[ff]); // Timing
Out[16]= {10.844, Null}
This takes some time, but one has to do this only once. But when we start extracting data for a fixed index, we see that it is quite slow:
In[24]:= getResultsForFixedIndex[largeres,10]//Short//Timing
Out[24]= {2.687,{{{196,26},{53,36},{360,43},{104,144},<<157674>>,{31,305},{240,291},
{256,38},{352,469}},{<<1>>}}}
The main idea we will use here to speed it up is to pack individual lists inside the largeres, those for indices, combinations and frequencies. While the full list can not be packed, those parts individually can:
In[18]:= Timing[
subIndicesPacked = Developer`ToPackedArray[largeres[[All,1]]];
subCombsPacked = Developer`ToPackedArray[largeres[[All,2]]];
subFreqsPacked = Developer`ToPackedArray[largeres[[All,3]]];
]
Out[18]= {1.672,Null}
This also takes some time, but it is a one-time operation again.
The following functions will then be used to extract the results for a fixed index much more efficiently:
Clear[extractPositionFromSparseArray];
extractPositionFromSparseArray[HoldPattern[SparseArray[u___]]] := {u}[[4, 2, 2]]
Clear[getCombinationsAndFrequenciesForIndex];
getCombinationsAndFrequenciesForIndex[packedIndices_, packedCombs_,
packedFreqs_, index_Integer] :=
With[{positions =
extractPositionFromSparseArray[
SparseArray[1 - Unitize[packedIndices - index]]]},
{Extract[packedCombs, positions],Extract[packedFreqs, positions]}];
Now, we have:
In[25]:=
getCombinationsAndFrequenciesForIndex[subIndicesPacked,subCombsPacked,subFreqsPacked,10]
//Short//Timing
Out[25]= {0.094,{{{196,26},{53,36},{360,43},{104,144},<<157674>>,{31,305},{240,291},
{256,38},{352,469}},{<<1>>}}}
We get a 30 times speed-up w.r.t. the naive Select approach.
Some notes on complexity
Note that the second solution is faster because it uses optimized data structures, but its complexity is the same as that of the Select-based one, which is linear in the length of the total list of unique combinations for all indices. Therefore, in theory, the previously discussed solutions based on nested hash-tables etc. may be asymptotically better. The problem is that in practice we will probably hit the memory limitations long before that. For the 10 million configuration sample, the above code was still 2-3 times faster than the fastest solution I posted before.
EDIT
The following modification:
Clear[getCombinationsAndFrequenciesForIndex];
getCombinationsAndFrequenciesForIndex[packedIndices_, packedCombs_,
packedFreqs_, index_Integer] :=
With[{positions =
extractPositionFromSparseArray[
SparseArray[Unitize[packedIndices - index], Automatic, 1]]},
{Extract[packedCombs, positions], Extract[packedFreqs, positions]}];
makes the code twice as fast again. Moreover, for more sparse indices (say, calling the sample-generation function with parameters like generateConfigurations[2000, 500, 500, 5000000]), the speed-up with respect to the Select-based function is about 100 times.
I'd probably use SparseArrays here (see update below), but if you insist on using functions and *Values to store and retrieve values an approach would be to have the first part (f[2] etc.) replaced by a symbol you create on the fly like:
Table[Symbol["f" <> IntegerString[i, 10, 3]], {i, 11}]
(* ==> {f001, f002, f003, f004, f005, f006, f007, f008, f009, f010, f011} *)
Symbol["f" <> IntegerString[56, 10, 3]]
(* ==> f056 *)
Symbol["f" <> IntegerString[56, 10, 3]][{3, 4}] = 12;
Symbol["f" <> IntegerString[56, 10, 3]][{23, 18}] = 12;
Symbol["f" <> IntegerString[56, 10, 3]] // Evaluate // DownValues
(* ==> {HoldPattern[f056[{3, 4}]] :> 12, HoldPattern[f056[{23, 18}]] :> 12} *)
f056 // DownValues
(* ==> {HoldPattern[f056[{3, 4}]] :> 12, HoldPattern[f056[{23, 18}]] :> 12} *)
Personally I prefer Leonid's solution, as it's much more elegant but YMMV.
Update
On OP's request, about using SparseArrays:
Large SparseArrays take up a fraction of the size of standard nested lists. We can make f a large (100,000 entries) sparse array of sparse arrays:
f = SparseArray[{_} -> 0, 100000];
f // ByteCount
(* ==> 672 *)
(* initialize f with sparse arrays, takes a few seconds with f this large *)
Do[ f[[i]] = SparseArray[{_} -> 0, {100, 110}], {i,100000}] // Timing//First
(* ==> 18.923 *)
(* this takes about 2.5% of the memory that a normal array would take: *)
f // ByteCount
(* ==> 108000040 *)
ConstantArray[0, {100000, 100, 100}] // ByteCount
(* ==> 4000000176 *)
(* counting phase *)
f[[1]][[1, 2]]++;
f[[1]][[1, 2]]++;
f[[1]][[42, 64]]++;
f[[2]][[100, 11]]++;
(* reporting phase *)
f[[1]] // ArrayRules
f[[2]] // ArrayRules
f // ArrayRules
(*
==>{{1, 2} -> 2, {42, 64} -> 1, {_, _} -> 0}
==>{{100, 11} -> 1, {_, _} -> 0}
==>{{1, 1, 2} -> 2, {1, 42, 64} -> 1, {2, 100, 11} -> 1, {_, _, _} -> 0}
*)
As you can see, ArrayRules makes a nice list with contributions and counts. This can be done for each f[i] separately or the whole bunch together (last line).
In some scenarios (depending upon the performance needed to generate the values), the following easy solution using an auxiliary list (f[i,0]) may be useful:
f[_Integer][{_Integer ..}] := 0;
f[_Integer, 0] := Sequence @@ {};
Table[
r = RandomInteger[1000, 2];
f[h = RandomInteger[100000]][r] = RandomInteger[10];
f[h, 0] = Union[f[h, 0], {r}];
, {i, 10^6}];
ExtractConfigurationsAndOccurences[f_, i_] := {f[i, 0], f[i][#] & /@ f[i, 0]};
Timing@ExtractConfigurationsAndOccurences[f, 10]
Out[252]= {4.05231*10^-15, {{{172, 244}, {206, 115}, {277, 861}, {299,
862}, {316, 194}, {361, 164}, {362, 830}, {451, 306}, {614,
769}, {882, 159}}, {5, 2, 1, 5, 4, 10, 4, 4, 1, 8}}}
Many thanks to everyone for the help provided. I've been thinking a lot about everybody's input and I believe that in the simulation setup the following is the optimal solution:
SetAttributes[linkedList, HoldAllComplete];
temporarySymbols = linkedList[];
SetAttributes[bookmarkSymbol, Listable];
bookmarkSymbol[symbol_]:=
With[{old = temporarySymbols}, temporarySymbols= linkedList[old,symbol]];
registerConfiguration[index_]:=registerConfiguration[index]=
Module[
{
cs = linkedList[],
bookmarkConfiguration,
accumulator
},
(* remember the symbols we generate so we can remove them later *)
bookmarkSymbol[{cs,bookmarkConfiguration,accumulator}];
getCs[index] := List @@ Flatten[cs, Infinity, linkedList];
getCsAndFreqs[index] := {getCs[index], accumulator /@ getCs[index]};
accumulator[_]=0;
bookmarkConfiguration[c_]:=bookmarkConfiguration[c]=
With[{oldCs=cs}, cs = linkedList[oldCs, c]];
Function[c,
bookmarkConfiguration[c];
accumulator[c]++;
]
]
pattern = Verbatim[RuleDelayed][Verbatim[HoldPattern][HoldPattern[registerConfiguration [_Integer]]],_];
clearSimulationData :=
Block[{symbols},
DownValues[registerConfiguration]=DeleteCases[DownValues[registerConfiguration],pattern];
symbols = List @@ Flatten[temporarySymbols, Infinity, linkedList];
(*Print["symbols to purge: ", symbols];*)
ClearAll /@ symbols;
temporarySymbols = linkedList[];
]
It is based on Leonid's solution from one of the previous posts, extended with belisarius' suggestion to include extra indexing for configurations that have been processed. Previous approaches are adapted so that configurations can be naturally registered and extracted using more or less the same code. This kills two birds with one stone, since bookkeeping and retrieval are strongly interrelated.
This approach will work better in the situation when one wants to add simulation data incrementally (all curves are normally noisy so one has to add runs incrementally to obtain good plots). The sparse array approach will work better when data are generated in one go and then analyzed, but I do not remember being personally in such a situation where I had to do that.
Also, I was rather naive thinking that the data extraction and generation could be treated separately. In this particular case it seems one should have both perspectives in mind. I profoundly apologise for bluntly dismissing any previous suggestions in this direction (there were a few implicit ones).
There are some open/minor problems that I do not know how to handle, e.g. when clearing the symbols I cannot clear headers like accumulator$164; I can only clear the subvalues associated with it. I have no clue why. Also, if With[{oldCs=cs}, cs = linkedList[oldCs, c]] is changed into something like cs = linkedList[cs, c], configurations are not stored. I have no clue why the second option does not work either. But these minor problems are well-defined satellite issues that can be addressed in the future. By and large the problem seems solved by the generous help from all involved.
Many thanks again for all the help.
Regards
Zoran
p.s. There are some timings, but to understand what is going on I will append the code that is used for benchmarking. In brief, the idea is to generate lists of configurations and just Map through them by invoking registerConfiguration. This essentially simulates the data generation process. Here is the code used for testing:
fillSimulationData[sampleArg_] :=MapIndexed[registerConfiguration[#2[[1]]][#1]&, sampleArg,{2}];
sampleForIndex[index_]:=
Block[{nsamples,min,max},
min = Max[1,Floor[(9/10)maxSamplesPerIndex]];
max = maxSamplesPerIndex;
nsamples = RandomInteger[{min, max}];
RandomInteger[{1,10},{nsamples,ntypes}]
];
generateSample :=
Table[sampleForIndex[index],{index, 1, nindexes}];
measureGetCsTime :=((First @ Timing[getCs[#]])& /@ Range[1, nindexes]) // Max
measureGetCsAndFreqsTime:=((First @ Timing[getCsAndFreqs[#]])& /@ Range[1, nindexes]) // Max
reportSampleLength[sampleArg_] := StringForm["Total number of confs = ``, smallest accumulator length ``, largest accumulator length = ``", Sequence @@ {Total[#],Min[#],Max[#]}& [Length /@ sampleArg]]
The first example is relatively modest:
clearSimulationData;
nindexes=100;maxSamplesPerIndex = 1000; ntypes = 2;
largeSample1 = generateSample;
reportSampleLength[largeSample1];
Total number of confs = 94891, smallest accumulator length 900, largest accumulator length = 1000;
First @ Timing @ fillSimulationData[largeSample1]
gives 1.375 secs which is fast I think.
With[{times = Table[measureGetCsTime, {50}]},
ListPlot[times, Joined -> True, PlotRange -> {0, Max[times]}]]
gives times around 0.016 secs, and
With[{times = Table[measureGetCsAndFreqsTime, {50}]},
ListPlot[times, Joined -> True, PlotRange -> {0, Max[times]}]]
gives the same times. Now the real killer:
nindexes = 10; maxSamplesPerIndex = 100000; ntypes = 10;
largeSample3 = generateSample;
largeSample3 // Short
{{{2,2,1,5,1,3,7,9,8,2},92061,{3,8,6,4,9,9,7,8,7,2}},8,{{4,10,1,5,9,8,8,10,8,6},95498,{3,8,8}}}
reported as
Total number of confs = 933590, smallest accumulator length 90760, largest accumulator length = 96876
gives generation times of ca 1.969 - 2.016 secs, which is unbelievably fast. I mean, this is like going through a gigantic list of ca. one million elements and applying a function to each element.
The extraction times for configs and {configs, freqs} are roughly 0.015 and 0.03 secs respectively.
To me this is a mind blowing speed I would never expect from Mathematica!

Mathematica "AppendTo" function problem

I'm a newbie in Mathematica and I'm having a major malfunction with adding columns to a data table. I'm running Mathematica 7 in Vista. I have spent a lot of time RFD before asking here.
I have a data table (mydata) with three columns and five rows. I'm trying to add two lists of five elements to the table (effectively adding two columns to the data table).
This works perfectly:
Table[AppendTo[mydata[[i]],myfirstlist[[i]]],{i,4}]
Printing out the table with: mydata // TableForm shows the added column.
However, when I try to add my second list
Table[AppendTo[mydata[[i]],mysecondlist[[i]]],{i,5}]
either Mathematica crashes(!) or I get a slew of Part::partw and Part::spec errors saying Part 5 does not exist.
However, after all the error messages (if Mathematica does not crash), again printing out the data table with: mydata // TableForm shows the data table with five columns just like I intended. All TableForm formatting options on the revised data table work fine.
Could anyone tell me what I'm doing wrong? Thanks in advance!
Let's try to clarify what the double transpose method consists of. I make no claims about the originality of the approach. My focus is on the clarity of exposition.
Let's begin with 5 lists. First we'll place three in a table. Then we'll add the final two.
food = {"bagels", "lox", "cream cheese", "coffee", "blueberries"};
mammals = {"fisher cat", "weasel", "skunk", "raccon", "squirrel"};
painters = {"Picasso", "Rembrandt", "Klee", "Rousseau", "Warhol"};
countries = {"Brazil", "Portugal", "Azores", "Guinea Bissau",
"Cape Verde"};
sports = {"golf", "badminton", "football", "tennis", "rugby"};
The first three lists--food, mammals, painters--become the elements of lists3. They are just lists, but TableForm displays them in a table as rows.
(lists3 = {food, mammals, painters}) // TableForm
mydata will be the name for lists3 transposed. Now the three lists appear as columns. That's what transposition does: columns and rows are switched.
(mydata = Transpose@lists3) // TableForm
This is where the problem actually begins. How can we add two additional columns (that is, the lists for countries and sports)? So let's work with the remaining two lists.
(lists2 = {countries, sports}) // TableForm
So we can Join Transpose[mydata] and lists2....
(lists5 = Join[Transpose[mydata], lists2]) // TableForm
[Alternatively, we might have Joined lists3 and lists2, because the second transposition, the transposition of mydata, undoes the earlier transposition:
lists3 is just the transposition of mydata (and vice versa).]
In[]:= lists3 === Transpose[mydata]
Out[]:= True
Now we only need to Transpose the result to obtain the desired final table of five lists, each occupying its own column:
Transpose@lists5 // TableForm
I hope that helps shed some light on how to add two columns to a table. I find this way reasonably clear. You may find some other way clearer.
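For reference, the whole double-transpose method condenses into a single line (same names as above):
Transpose[Join[Transpose[mydata], {countries, sports}]] // TableForm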
There are several things to cover here. First, the following code does not give me any errors, so there may be something else going on here. Perhaps you should post a full code block that produces the error.
mydata = Array[Subscript[{##}] &, {5, 3}];
myfirstlist = Range[1, 5];
mysecondlist = Range[6, 10];
Table[AppendTo[mydata[[i]], myfirstlist[[i]]], {i, 4}];
mydata // TableForm
Table[AppendTo[mydata[[i]], mysecondlist[[i]]], {i, 5}];
mydata // TableForm
Second, there is no purpose in using Table here, as you are modifying mydata directly. Table will use up memory pointlessly.
Third, there are better ways to accomplish this task.
See How to prepend a column and Inserting into a 2d list
I must retract my definitive statement that there are better ways. After changing Table to Do and running a few quick tests, this appears to be a competitive method for some data.
I am using Mathematica 7, so that does not appear to be the problem.
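For concreteness, the Do variant referred to above, together with a loop-free alternative, might look like this (a sketch using the same names as the test code; use one or the other, not both):
(* Do instead of Table: same effect, no wasted output list *)
Do[AppendTo[mydata[[i]], myfirstlist[[i]]], {i, 5}];
Do[AppendTo[mydata[[i]], mysecondlist[[i]]], {i, 5}];
(* loop-free: append both new columns to every row at once *)
mydata = MapThread[Join[#1, {#2, #3}] &, {mydata, myfirstlist, mysecondlist}];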
As mentioned before, there are better alternatives to adding columns to a list, and like Gareth and Mr.Wizard, I do not seem to be able to reproduce the problem on v. 7. But, I want to focus on the error itself, and see if we can correct it that way. When Mathematica produces the message Part::partw it spits out part of the offending list like
Range[1000][[1001]]
Part::partw: Part 1001 of
{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,
31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,<<950>>}
does not exist.
So, the question I ask is which list is giving me the problems? My best guess is it is mysecondlist, and I'd check Length @ mysecondlist to see if it is actually 5 elements long.
Well, here's my two cents with what I believe is a very fast and IMHO most easily understandable construction.
First, some test arrays:
m = RandomInteger[100, {2000, 10000}];
l1 = RandomInteger[100, 2000];
l2 = RandomInteger[100, 2000];
{r, c} = Dimensions[m];
I increased the test array sizes somewhat to improve accuracy of the following timing measurements.
The method involves the invoking of the powers of Part ([[...]]), All and Span (;;).
Basically, I set up a new working matrix with the future dimensions of the data array after addition of the two columns, then add the original matrix using All and Span and add the additional columns with All only. I then copy back the scrap matrix to our original matrix, as the other methods also return the modified data matrix.
n = ConstantArray[0, {r, c} + {0, 2}];
n[[All, 1 ;; c]] = m;
n[[All, c + 1]] = l1;
n[[All, c + 2]] = l2;
m = n;
As for timing:
Mean[
Table[
First[
AbsoluteTiming[
n2 = ConstantArray[0, {r, c} + {0, 2}];
n2[[All, 1 ;; c]] = m;
n2[[All, c + 1]] = l1;
n2[[All, c + 2]] = l2;
m2 = n2;
]
],
{10}
]
]
0.1056061
(an average of 10 runs)
The other proposed method with Do (Mr.Wizard and the OP):
Mean[
Table[
n1 = m;
First[
AbsoluteTiming[
Do[AppendTo[n1[[i]], l1[[i]]], {i, 2000}];
Do[AppendTo[n1[[i]], l2[[i]]], {i, 2000}];
]
],
{10}
]
]
0.4898280
The result is the same:
In[9]:= n2 == n1
Out[9]= True
So, a conceptually easy and quick (5 times faster!) method.
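Incidentally (not part of the answer above), the same column append can also be written with Join at level 2, starting from the original m, l1 and l2:
mJoined = Join[m, Transpose[{l1, l2}], 2];  (* dimensions {r, c + 2} *)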
I tried to reproduce this but failed. I'm running Mma 8 on Windows XP; it doesn't seem like the difference should matter, but who knows? I said, successively,
myData = {{1, 2, 3}, {2, 3, 4}, {8, 9, 10}, {1, 1, 1}, {2, 2, 2}}
myFirstList = {9, 9, 9, 9, 9}
mySecondList = {6, 6, 6, 6, 6}
Table[AppendTo[myData[[i]], myFirstList[[i]]], {i, 4}]
Table[AppendTo[myData[[i]], mySecondList[[i]]], {i, 5}]
myData // TableForm
and got (0) no crash, (1) no errors or warnings, and (2) the output I expected. (Note: I used 4 rather than 5 in the limit of the first set of appends, just like in your question, in case that was somehow provoking trouble.)
The Mma documentation claims that AppendTo[a,b] is always equivalent to a=Append[a,b], which suggests that it isn't modifying the list in-place. But I wonder whether maybe AppendTo sometimes does modify the list when it thinks it's safe to do so; then if it thinks it's safe and it isn't, there could be nasty consequences. Do the weird error messages and crashes still happen if you replace AppendTo with Append + ordinary assignment?
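Concretely, the suggested replacement would be something like:
Table[myData[[i]] = Append[myData[[i]], mySecondList[[i]]], {i, 5}]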

sorting algorithm where pairwise-comparison can return more information than -1, 0, +1

Most sort algorithms rely on a pairwise comparison that determines whether A < B, A = B or A > B.
I'm looking for algorithms (and for bonus points, code in Python) that take advantage of a pairwise-comparison function that can distinguish a lot less from a little less or a lot more from a little more. So perhaps instead of returning {-1, 0, 1} the comparison function returns {-2, -1, 0, 1, 2} or {-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5} or even a real number on the interval (-1, 1).
For some applications (such as near sorting or approximate sorting) this would enable a reasonable sort to be determined with less comparisons.
The extra information can indeed be used to minimize the total number of comparisons. Calls to the super_comparison function can be used to make deductions equivalent to a great number of calls to a regular comparison function. For example, a much-less-than b and c little-less-than b implies a < c < b.
The deductions can be organized into bins or partitions which can each be sorted separately. Effectively, this is equivalent to QuickSort with an n-way partition. Here's an implementation in Python:
from collections import defaultdict
from random import choice

def quicksort(seq, compare):
    'Stable in-place sort using a 3-or-more-way comparison function'
    # Make an n-way partition on a random pivot value
    segments = defaultdict(list)
    pivot = choice(seq)
    for x in seq:
        ranking = 0 if x is pivot else compare(x, pivot)
        segments[ranking].append(x)
    seq.clear()
    # Recursively sort each segment and store it in the sequence
    for ranking, segment in sorted(segments.items()):
        if ranking and len(segment) > 1:
            quicksort(segment, compare)
        seq += segment

if __name__ == '__main__':
    from random import randrange
    from math import log10

    def super_compare(a, b):
        'Compare with extra logarithmic near/far information'
        c = -1 if a < b else 1 if a > b else 0
        return c * (int(log10(max(abs(a - b), 1.0))) + 1)

    n = 10000
    data = [randrange(4*n) for i in range(n)]
    goal = sorted(data)
    quicksort(data, super_compare)
    print(data == goal)
By instrumenting this code with the trace module, it is possible to measure the performance gain. In the above code, a regular three-way compare uses 133,000 comparisons while a super comparison function reduces the number of calls to 85,000.
The code also makes it easy to experiment with a variety of comparison functions. This will show that naïve n-way comparison functions do very little to help the sort. For example, if the comparison function returns +/-2 for differences greater than four and +/-1 for differences four or less, there is only a modest 5% reduction in the number of comparisons. The root cause is that the coarse-grained partitions used in the beginning only have a handful of "near matches" and everything else falls into "far matches".
An improvement to the super comparison is to cover logarithmic ranges (i.e. +/-1 if within ten, +/-2 if within a hundred, +/-3 if within a thousand, and so on).
An ideal comparison function would be adaptive. For any given sequence size, the comparison function should strive to subdivide the sequence into partitions of roughly equal size. Information theory tells us that this will maximize the number of bits of information per comparison.
The adaptive approach makes good intuitive sense as well. People should first be partitioned into love vs like before making more refined distinctions such as love-a-lot vs love-a-little. Further partitioning passes should each make finer and finer distinctions.
You can use a modified quick sort. Let me explain on an example when you comparison function returns [-2, -1, 0, 1, 2]. Say, you have an array A to sort.
Create 5 empty arrays - Aminus2, Aminus1, A0, Aplus1, Aplus2.
Pick an arbitrary element of A, X.
For each element of the array, compare it with X.
Depending on the result, place the element in one of the Aminus2, Aminus1, A0, Aplus1, Aplus2 arrays.
Apply the same sort recursively to Aminus2, Aminus1, Aplus1, Aplus2 (note: you don't need to sort A0, as all the elements there are equal to X).
Concatenate the arrays to get the final result: A = Aminus2 + Aminus1 + A0 + Aplus1 + Aplus2.
It seems like using raindog's modified quicksort would let you stream out results sooner and perhaps page into them faster.
Maybe those features are already available from a carefully-controlled qsort operation? I haven't thought much about it.
This also sounds kind of like radix sort except instead of looking at each digit (or other kind of bucket rule), you're making up buckets from the rich comparisons. I have a hard time thinking of a case where rich comparisons are available but digits (or something like them) aren't.
I can't think of any situation in which this would be really useful. Even if I could, I suspect the added CPU cycles needed to sort fuzzy values would be more than those "extra comparisons" you allude to. But I'll still offer a suggestion.
Consider this possibility (all strings use the 27 characters a-z and _):
11111111112
12345678901234567890
1/ now_is_the_time
2/ now_is_never
3/ now_we_have_to_go
4/ aaa
5/ ___
Obviously strings 1 and 2 are more similar that 1 and 3 and much more similar than 1 and 4.
One approach is to scale the difference value for each identical character position and use the first different character to set the last position.
Putting aside signs for the moment, comparing string 1 with 2, they differ in position 8 by 'n' - 't'. That's a difference of 6. In order to turn that into a single digit 1-9, we use the formula:
digit = ceiling(9 * abs(diff) / 27)
since the maximum difference is 26. The minimum difference of 1 becomes the digit 1. The maximum difference of 26 becomes the digit 9. Our difference of 6 becomes 3.
And because the difference is in position 8, our comparison function will return 3x10^-8 (actually it will return the negative of that, since string 1 comes after string 2).
Using a similar process for strings 1 and 4, the comparison function returns -5x10^-1. The highest possible return (strings 4 and 5) has a difference in position 1 of '_' - 'a' (26), which generates the digit 9 and hence gives us 9x10^-1.
Take these suggestions and use them as you see fit. I'd be interested in knowing how your fuzzy comparison code ends up working out.
Considering you are looking to order a number of items based on human comparison, you might want to approach this problem like a sports tournament. You might allow each human vote to increase the score of the winner by 3 and decrease the loser's by 3, +2 and -2, +1 and -1, or just 0 and 0 for a draw.
Then you just do a regular sort based on the scores.
Another alternative would be a single or double elimination tournament structure.
You can use two comparisons to achieve this. Multiply the more important comparison by 2, and add them together.
Here is an example of what I mean in Perl.
It compares two array references by the first element, then by the second element.
use strict;
use warnings;
use 5.010;
my @array = (
[a => 2],
[b => 1],
[a => 1],
[c => 0]
);
say "$_->[0] => $_->[1]" for sort {
($a->[0] cmp $b->[0]) * 2 +
($a->[1] <=> $b->[1]);
} @array;
a => 1
a => 2
b => 1
c => 0
You could extend this to any number of comparisons very easily.
Perhaps there's a good reason to do this but I don't think it beats the alternatives for any given situation and certainly isn't good for general cases. The reason? Unless you know something about the domain of the input data and about the distribution of values you can't really improve over, say, quicksort. And if you do know those things, there are often ways that would be much more effective.
Anti-example: suppose your comparison returns a value of "huge difference" for numbers differing by more than 1000, and that the input is {0, 10000, 20000, 30000, ...}
Anti-example: same as above but with input {0, 10000, 10001, 10002, 20000, 20001, ...}
But, you say, I know my inputs don't look like that! Well, in that case tell us what your inputs really look like, in detail. Then someone might be able to really help.
For instance, once I needed to sort historical data. The data was kept sorted. When new data were added it was appended, then the list was run again. I did not have the information of where the new data was appended. I designed a hybrid sort for this situation that handily beat qsort and others by picking a sort that was quick on already sorted data and tweaking it to be fast (essentially switching to qsort) when it encountered unsorted data.
The only way you're going to improve over the general purpose sorts is to know your data. And if you want answers you're going to have to communicate that here very well.
