Finding equal or similar sequences in a list - wolfram-mathematica

I need to transform a list into an empty one if it has two non-overlapping sequences with a certain property. E.g., in this case, equal sequences with length >= 2:
mm[{___, b1__, ___, b1__, ___}] := {} /; Length[{b1}] >= 2
or, for example, two sublists with a similarity function equal to 0 and Length = 4:
mm[{___, b1__, ___, b2__, ___}] := {} /;
NeedlemanWunschSimilarity[{b1}, {b2}] == 0 && Length[{b1}] == 4 &&
Length[{b2}] == 4
and so on.....
For a Mathematica guru who knows the internal algorithms: is there a faster way, perhaps using PatternTest?

I await a fuller description of what you are doing, but in the meantime:
mm2[{___, b1 : Repeated[_, {4}], ___, b2 : Repeated[_, {4}], ___}] := {} /;
NeedlemanWunschSimilarity[{b1}, {b2}] == 0
This uses Repeated to test only sequences of the correct length.
You can specify a minimum and maximum sequence length with Repeated[_, {min, max}].

Related

Modulo of negative integers in Go

I am learning Go and I come from a Python background.
Recently, I stumbled onto a behaviour of the % (modulo) operator which is different from the corresponding operator in Python. Contrary to the usual definition of the modulo operation and remainder, taking the modulus of a negative integer by a positive integer returns a negative value.
Example:
Python
a, b, n = -5, 5, 3
for i in range(a, b):
    print(i % n)
Output:
1
2
0
1
2
0
1
2
0
1
Go
a, b, n := -5, 5, 3
for i := a; i < b; i++ {
    fmt.Println(i % n)
}
Output:
-2
-1
0
-2
-1
0
1
2
0
1
After reading about the modulo operator and a few similar questions asked about the reason behind these differences, I understand that they are due to the design goals of the languages concerned.
Is there a built-in functionality in Go which replicates the modulus operation of Python?
Alternate: Is there an internal method for computing the "modulus" instead of the "remainder"?
See this comment by one of the language designers:
There are several reasons for the current definition:
the current semantics for % is directly available as a result from x86 architectures
it would be confusing to change the meaning of the elementary operator % and not change its name
it's fairly easy to compute another modulus from the % result
Note that % computes the "remainder" as opposed to the "modulus".
There is not an operator or function in the standard library which replicates the modulus operation of Python.
It is possible to write a function which replicates the modulus operation of Python:
func modLikePython(d, m int) int {
    var res int = d % m
    if (res < 0 && m > 0) || (res > 0 && m < 0) {
        return res + m
    }
    return res
}
Note that in Python 5 % -3 is -1 and this code replicates that behavior as well. If you don't want that, remove the second part after || in the if statement.
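As a quick check (my own addition, not part of the original answer), the following self-contained program runs the function above over the same range as the question and reproduces Python's output:
package main

import "fmt"

func modLikePython(d, m int) int {
    var res int = d % m
    if (res < 0 && m > 0) || (res > 0 && m < 0) {
        return res + m
    }
    return res
}

func main() {
    // Same loop as the question; prints 1 2 0 1 2 0 1 2 0 1, matching Python.
    for i := -5; i < 5; i++ {
        fmt.Print(modLikePython(i, 3), " ")
    }
    fmt.Println()
}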
Is there an internal method for computing the "modulus" instead of the "remainder"?
Note that % computes the "remainder" as opposed to the "modulus".
These quotes are a bit misleading.
Look up any definition of "modulo", by and large it will say that it is the remainder after division. The problem is that when we say "the remainder", it implies that there is only one. When negative numbers are involved, there can be more than one distinct remainder. On the Wikipedia page for Remainder, it differentiates between the least positive remainder and the least absolute remainder. You could also add a least negative remainder (least negative meaning negative, but closest to 0).
Generally for modulus operators, if it returned a positive value, it was the least positive remainder and if it returned a negative value, it was the least negative remainder. The sign of the returned value can be determined in multiple ways. For example given c = a mod b, you could define the sign of c to be
The sign of a (what % does in Go)
The sign of b (what % does in Python)
Non-negative always
Here's a list of programming languages and their modulo implementations defined in this way https://en.wikipedia.org/wiki/Modulo_operation#In_programming_languages
Here's a branchless way to replicate Python's % operator with a Go function
func mod(a, b int) int {
    return (a%b + b) % b
}
To reiterate, this follows the rule:
given c = a mod b, the sign of c will be the sign of b.
Or in other words, the modulus result has the same sign as the divisor
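To illustrate (a small sketch of my own, not from the answer), a few spot checks showing that the result takes the divisor's sign, including the negative-divisor case:
package main

import "fmt"

// mod returns a result with the same sign as the divisor b, like Python's %.
func mod(a, b int) int {
    return (a%b + b) % b
}

func main() {
    fmt.Println(mod(-5, 3)) // 1, as in Python
    fmt.Println(mod(5, -3)) // -1, as in Python
    fmt.Println(mod(-8, 6)) // 4
}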
math/big does Euclidean modulus:
package main

import "math/big"

func mod(x, y int64) int64 {
    bx, by := big.NewInt(x), big.NewInt(y)
    return new(big.Int).Mod(bx, by).Int64()
}

func main() {
    z := mod(-5, 3)
    println(z == 1)
}
https://golang.org/pkg/math/big#Int.Mod
On Q2, you could use:
func modNeg(v, m int) int {
    return (v%m + m) % m
}
Would output:
modNeg(-1, 5) => 4
modNeg(-2, 3) => 1
In most cases, just add the second number to the result:
Python:
-8%6 => 4
Golang:
-8%6 + 6 => 4
So the function will be like this:
func PyMod(d int, m int) int {
    d %= m
    if d < 0 {
        d += m
    }
    return d
}
It works for some other situations such as a%-b in addition to -a%b.
But if you want it to work even for -a%-b, do like this:
func PyMod(d int, m int) int {
    // Add this condition at the top
    if d < 0 && m < 0 {
        return d % m
    }
    d %= m
    if d < 0 {
        d += m
    }
    return d
}

Magic Square of n order in wolfram mathematica

Please can you help me with the creation of a function in Wolfram Mathematica for a magic square? I must create a function MagicSquare[n_] whose output is a square matrix of the first n^2 integers, and the sum of these integers in every column, every row, and on both diagonals must be the same. Please help me; I have tried this for days and failed. I need it for my school assignment.
Here is a simple brute-force approach. Note the check value m is the magic constant.
(Setting the random values to the array variables makes nifty use of HoldFirst.)
n = 3;
m = n (n^2 + 1)/2;
check = {0};
While[Unequal[Union[check], {m}],
Clear[s];
x = Table[s[i, j], {i, 1, n}, {j, 1, n}];
d1 = Diagonal[x];
d2 = Diagonal[Reverse[x]];
cols = Transpose[x];
vars = Flatten[x];
rand = RandomSample[Range[n^2], n^2];
MapThread[Function[{v, r}, v = r, HoldFirst], {vars, rand}];
check = Total /@ Join[x, cols, {d1, d2}]];
MatrixForm[x]
8 3 4
1 5 9
6 7 2
Here is another brute force approach that works for n=3 ..
n = 3
m = n (n^2 + 1) /2
Select[
 Partition[#, n] & /@
  Permutations[Range[n^2]],
 (Union@(Total /@ #)) == {m} &&
   (Union@(Total /@ Transpose[#])) == {m} &&
   Total@Diagonal[#] == m &&
   Total@Diagonal[Reverse@#] == m &][[1]] // MatrixForm
This has the advantage of immediately producing an out of memory error for larger n, while Chris' will run approximately forever. :)

What is the fastest algorithm for checking if a 14 digit number is prime?

I need the fastest possible program for a primality check on a 14-digit (or bigger) number. I searched multiple sites, but I'm not sure the ones I found will work with numbers as big as this.
A 14-digit number is not very big as far as prime testing is concerned. When the number has some special structure, specialised tests may be available that are faster (e.g. if it's a Mersenne number), but in general, the fastest approach for numbers in that range is:
Start with trial division by some small numbers. If you plan to do many checks, it's worth making a list of the n smallest primes so that the trial division only divides by primes; for a single test, just avoiding even test divisors (except 2) and multiples of 3 (except 3) is good enough. What "small" means is up to interpretation; choices between 100 and 10000 for the cutoff seem reasonable, that many (few) divisions are still done quickly, and they find the overwhelming majority of composite numbers.
If the trial division has not determined the number to be composite (or prime, if it's actually smaller than the square of the cutoff), you can use one of the fast probabilistic prime tests that are known to be definitive for the range you're interested in. The usual candidates are
the Baillie/Pomerance/Selfridge/Wagstaff test, a strong Fermat test for base 2, followed by a test for being a square and a (strong) Lucas test. That doesn't have false positives below 2^64, so it's definitive for numbers with 14-18 digits.
strong Fermat tests for a collection of bases known to be definitive for the range considered. According to Chris Caldwell's prime pages, "If n < 341,550,071,728,321 is a 2, 3, 5, 7, 11, 13 and 17-SPRP, then n is prime".
Somewhat slower, and considerably harder to implement, would be the fast deterministic general-purpose prime tests, APR-CL, ECPP, AKS. They should already beat pure trial division for numbers of 14 or more digits, but be much slower than the incidentally-known-to-be-correct-for-the-range probabilistic tests.
But, depending on your use-case, the best method could also be to sieve a contiguous range of numbers (if you want to find the primes between 10^14 - 10^9 and 10^14, for example, a sieve would be much faster than several hundred million fast individual prime tests).
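To illustrate the sieving idea, here is a rough Go sketch of an offset sieve of Eratosthenes; the function name and interface are mine, and for a range as wide as 10^9 numbers you would want to split the work into smaller segments rather than allocate the whole range at once:
// sieveRange reports the primes in the half-open interval [lo, hi).
// Minimal sketch: assumes 2 <= lo < hi and allocates one bool per number,
// so very wide ranges should be processed in smaller blocks.
func sieveRange(lo, hi int64) []int64 {
    composite := make([]bool, hi-lo)
    for p := int64(2); p*p < hi; p++ {
        // first multiple of p that is >= lo and >= p*p
        start := ((lo + p - 1) / p) * p
        if start < p*p {
            start = p * p
        }
        for m := start; m < hi; m += p {
            composite[m-lo] = true
        }
    }
    var primes []int64
    for i, c := range composite {
        if !c {
            primes = append(primes, lo+int64(i))
        }
    }
    return primes
}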
As Daniel Fischer notes, a 14-digit number isn't particularly large for primality testing. That gives you several options. The first is simple trial division:
function isPrime(n)
    d := 2
    while d * d <= n
        if n % d == 0
            return Composite
        d := d + 1
    return Prime
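For reference, a direct Go translation of that pseudocode (my sketch, not part of the original answer); for a 14-digit n the loop runs up to about 10^7 iterations:
// isPrimeTrial does naive trial division, fine for an occasional test
// of a number up to around 10^14.
func isPrimeTrial(n uint64) bool {
    if n < 2 {
        return false
    }
    for d := uint64(2); d*d <= n; d++ {
        if n%d == 0 {
            return false
        }
    }
    return true
}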
The square root of 10^14 is 10^7, so that might take a little while. Somewhat faster is to use a prime wheel:
struct wheel(delta[0 .. len-1], len, cycle)
w := wheel([1,2,2,4,2,4,2,4,6,2,6], 11, 3)
function isPrime(n, w)
    d := 2; next := 0
    while d * d <= n
        if n % d == 0
            return Composite
        else
            d := d + w.delta[next]
            next := next + 1
            if next == w.len
                next := w.cycle
    return Prime
That should speed up the naive trial division by a factor of 2 or 3 times, which might be sufficient for your needs.
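A rough Go rendering of the wheel (again my own sketch): after the first pass through the delta table, the cycle skips every multiple of 2, 3, and 5 among the trial divisors:
// isPrimeWheel is trial division driven by the wheel from the pseudocode above.
func isPrimeWheel(n uint64) bool {
    if n < 2 {
        return false
    }
    delta := []uint64{1, 2, 2, 4, 2, 4, 2, 4, 6, 2, 6}
    const cycle = 3
    d, next := uint64(2), 0
    for d*d <= n {
        if n%d == 0 {
            return false
        }
        d += delta[next]
        next++
        if next == len(delta) {
            next = cycle
        }
    }
    return true
}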
A better option is probably a Miller-Rabin pseudoprimality tester. Start with a strong pseudoprime test:
function isStrongPseudoprime(n, a)
    d := n - 1; s := 0
    while d is even
        d := d / 2; s := s + 1
    t := powerMod(a, d, n)
    if t == 1 return ProbablyPrime
    while s > 0
        if t == n - 1
            return ProbablyPrime
        t := (t * t) % n
        s := s - 1
    return DefinitelyComposite
Each a for which the function returns ProbablyPrime is a witness to the primality of n:
function isPrime(n)
    for a in [2,3,5,7,11,13,17]
        if isStrongPseudoprime(n, a) == DefinitelyComposite
            return DefinitelyComposite
    return ProbablyPrime
As Fischer noted, for n < 10^14 this is perfectly reliable, according to a paper by Gerhard Jaeschke; if you want to test the primality of larger numbers, choose 25 witnesses a
at random. The powerMod(b,e,m) function returns b ^ e (mod m). If your language doesn't provide that function, you can efficiently calculate it like this:
function powerMod(b, e, m)
    x := 1
    while e > 0
        if e % 2 == 1
            x := (b * x) % m
        b := (b * b) % m
        e := floor(e / 2)
    return x
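As a concrete illustration, here is a Go sketch of the strong-pseudoprime test and witness loop above (my own translation, not from the answer). It uses math/big because the squaring step (t * t) % n overflows 64-bit integers when n has 14 digits; note also that the standard library's big.Int.ProbablyPrime already combines Miller-Rabin rounds with a Baillie-PSW test and is documented to be exact for inputs below 2^64, so in practice you may not need to write this yourself:
package main

import (
    "fmt"
    "math/big"
)

// isStrongPseudoprime follows the pseudocode above. Assumes n is odd and n > a.
func isStrongPseudoprime(n *big.Int, a int64) bool {
    one := big.NewInt(1)
    nMinus1 := new(big.Int).Sub(n, one)
    d := new(big.Int).Set(nMinus1)
    s := 0
    for d.Bit(0) == 0 { // while d is even
        d.Rsh(d, 1)
        s++
    }
    t := new(big.Int).Exp(big.NewInt(a), d, n) // powerMod(a, d, n)
    if t.Cmp(one) == 0 {
        return true
    }
    for ; s > 0; s-- {
        if t.Cmp(nMinus1) == 0 {
            return true
        }
        t.Mul(t, t).Mod(t, n)
    }
    return false
}

func isPrime(n *big.Int) bool {
    for _, a := range []int64{2, 3, 5, 7, 11, 13, 17} {
        if !isStrongPseudoprime(n, a) {
            return false
        }
    }
    return true
}

func main() {
    fmt.Println(isPrime(big.NewInt(2147483647))) // true: 2^31 - 1 is prime
    fmt.Println(isPrime(big.NewInt(2147483649))) // false: divisible by 3
}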
If you're interested in the math behind this test, I modestly recommend the essay Programming with Prime Numbers at my blog.
Loop a variable 'x', starting at 2 and incrementing by 1, until it reaches the value of the number 'num'. While looping, check whether num is divisible by x using modulo. If the remainder is 0, stop; num is prime exactly when the loop only stops at x == num.
Ex.
x = 2; mod = 1;
while (mod != 0)
{
    mod = num % x;
    x++;
}
// num is prime iff the loop stopped at x - 1 == num
Tadah! Prime Number checker... Not sure if there's a faster way than this though.
I made a much faster algorithm recently. It can easily check a 14-digit number in a few seconds. Just paste this code anywhere that accepts JavaScript code and run it. Keep in mind that the version of JavaScript must be recent as of this post, as it needs BigInt support in order to do these operations. Typically the latest browsers (Chrome, Firefox, Safari) will support a feature like this, but it's anyone's guess whether other browsers such as Microsoft IE will support it properly.
--
The algorithm works by using a combination of some of the previously mentioned algorithmic ideas.
However...
This algorithm was actually GENERATED by visualizing sets of prime numbers and multiplying them by various different values and then performing various mod operations to those values and using those numbers to create a 3d representation of all prime numbers which revealed the true patterns that exist within prime number sets.
var prime_nums = [2n,3n,5n,7n,11n,13n,17n,19n,23n,29n,31n,37n,41n,43n,47n,53n,59n,61n,67n,71n,73n,79n,83n,89n,97n,101n,103n,107n,109n,113n,127n,131n,137n,139n,149n,151n,157n,163n,167n,173n,179n,181n,191n,193n,197n,199n,211n,223n,227n,229n,233n,239n,241n,251n,257n,263n,269n,271n,277n,281n,283n,293n,307n,311n,313n,317n,331n,337n,347n,349n,353n,359n,367n,373n,379n,383n,389n,397n,401n,409n,419n,421n,431n,433n,439n,443n,449n,457n,461n,463n,467n,479n,487n,491n];
function isHugePrime(_num) {
    var num = BigInt(_num);
    var square_nums = [BigInt(9), BigInt(25), BigInt(49), BigInt(77), BigInt(1), BigInt(35), BigInt(55)];
    var z = BigInt(30);
    var y = num % z;
    var yList = [];
    yList.push(num % BigInt(78));
    var idx_guess = num / 468n;
    var idx_cur = 0;
    while ((z * z) < num) {
        z += BigInt(468);
        var l = prime_nums[prime_nums.length - 1];
        while (l < (z / BigInt(3))) {
            idx_cur++;
            l += BigInt(2);
            if (isHugePrime(l)) {
                prime_nums.push(l);
            }
        }
        y = num % z;
        yList.push(y);
    }
    for (var i = 0; i < yList.length; i++) {
        var y2 = yList[i];
        y = y2;
        if (prime_nums.includes(num)) { return true; }
        if ((prime_nums.includes(y)) || (y == BigInt(1)) || (square_nums.includes(y))) {
            if ((y != BigInt(1)) && ((num % y) != BigInt(0))) {
                for (var t = 0; t < prime_nums.length; t++) {
                    var r = prime_nums[t];
                    if ((num % r) == BigInt(0)) { return false; }
                }
                return true;
            }
            if (y == BigInt(1)) {
                var q = BigInt(num);
                for (var t = 0; t < prime_nums.length; t++) {
                    var r = prime_nums[t];
                    if ((q % r) == BigInt(0)) { return false; }
                }
                return true;
            }
        }
    }
    return false;
}

smallest integer not obtainable from {2,3,4,5,6,7,8} (Mathematica)

I'm trying to solve the following problem using Mathematica:
What is the smallest positive integer not obtainable from the set {2,3,4,5,6,7,8} via arithmetic operations {+,-,*,/}, exponentiation, and parentheses? Each number in the set must be used exactly once. Unary operations are NOT allowed (1 cannot be converted to -1 without using a 0, for example).
For example, the number 1073741824000000000000000 is obtainable via (((3+2)*(5+4))/6)^(8+7).
I am a beginner with Mathematica. I have written code that I believe solves the problem for the set {2,3,4,5,6,7} (I obtained 2249 as my answer), but my code is not efficient enough to work with the set {2,3,4,5,6,7,8}. (My code already takes 71 seconds to run on the set {2,3,4,5,6,7}.)
I would very much appreciate any tips or solutions to solving this harder problem with Mathematica, or general insights as to how I could speed my existing code.
My existing code uses a brute force, recursive approach:
(* this defines combinations for a set of 1 number as the set of that 1 number *)
combinations[list_ /; Length[list] == 1] := list
(* this tests whether it's ok to exponentiate two numbers including (somewhat) arbitrary restrictions to prevent overflow *)
oktoexponent[number1_, number2_] :=
If[number1 == 0, number2 >= 0,
If[number1 < 0,
(-number1)^number2 < 10000 \[And] IntegerQ[number2],
number1^number2 < 10000 \[And] IntegerQ[number2]]]
(* this takes a list and removes fractions with denominators greater than 100000 *)
cleanup[list_] := Select[list, Denominator[#] < 100000 &]
(* this defines combinations for a set of 2 numbers - and returns a set of all possible numbers obtained via applications of + - * / filtered by oktoexponent and cleanup rules *)
combinations[list_ /; Length[list] == 2 && Depth[list] == 2] :=
cleanup[DeleteCases[#, Null] &@DeleteDuplicates@
{list[[1]] + list[[2]],
list[[1]] - list[[2]],
list[[2]] - list[[1]],
list[[1]]*list[[2]],
If[oktoexponent[list[[1]], list[[2]]], list[[1]]^list[[2]],],
If[oktoexponent[list[[2]], list[[1]]], list[[2]]^list[[1]],],
If[list[[2]] != 0, list[[1]]/list[[2]],],
If[list[[1]] != 0, list[[2]]/list[[1]],]}]
(* this extends combinations to work with sets of sets *)
combinations[
list_ /; Length[list] == 2 && Depth[list] == 3] :=
Module[{m, n, list1, list2},
list1 = list[[1]];
list2 = list[[2]];
m = Length[list1]; n = Length[list2];
cleanup[
 DeleteDuplicates@
  Flatten@Table[
combinations[{list1[[i]], list2[[j]]}], {i, m}, {j, n}]]]
(* for a given set, partition returns the set of all partitions into two non-empty subsets *)
partition[list_] := Module[{subsets},
subsets = Select[Subsets[list], # != {} && # != list &];
 DeleteDuplicates@
  Table[Sort@{subsets[[i]], Complement[list, subsets[[i]]]}, {i,
Length[subsets]}]]
(* this finally extends combinations to work with sets of any size *)
combinations[list_ /; Length[list] > 2] :=
Module[{partitions, k},
partitions = partition[list];
k = Length[partitions];
 cleanup[Sort@
   DeleteDuplicates@
    Flatten@(combinations /@
Table[{combinations[partitions[[i]][[1]]],
combinations[partitions[[i]][[2]]]}, {i, k}])]]
Timing[desiredset = combinations[{2, 3, 4, 5, 6, 7}];]
{71.5454, Null}
Complement[
 Range[1, 3000], #] &@(Cases[#, x_Integer /; x > 0 && x <= 3000] &@
desiredset)
{2249, 2258, 2327, 2509, 2517, 2654, 2789, 2817, 2841, 2857, 2990, 2998}
This is unhelpful, but I'm under my quota for useless babbling today:
(* it turns out the symbolizing + * is not that useful after all *)
f[x_,y_] = x+y
fm[x_,y_] = x-y
g[x_,y_] = x*y
gd[x_,y_] = x/y
(* power properties *)
h[h[a_,b_],c_] = h[a,b*c]
h[a_/b_,n_] = h[a,n]/h[b,n]
h[1,n_] = 1
(* expand simple powers only! *)
(* does this make things worse? *)
h[a_,2] = a*a
h[a_,3] = a*a*a
(* all symbols for two numbers *)
allsyms[x_,y_] := allsyms[x,y] =
DeleteDuplicates[Flatten[{f[x,y], fm[x,y], fm[y,x],
g[x,y], gd[x,y], gd[y,x], h[x,y], h[y,x]}]]
allsymops[s_,t_] := allsymops[s,t] =
DeleteDuplicates[Flatten[Outer[allsyms[#1,#2]&,s,t]]]
Clear[reach];
reach[{}] = {}
reach[{n_}] := reach[n] = {n}
reach[s_] := reach[s] = DeleteDuplicates[Flatten[
Table[allsymops[reach[i],reach[Complement[s,i]]],
{i,Complement[Subsets[s],{ {},s}]}]]]
The general idea here is to avoid calculating powers (which are
expensive and non-commutative), while at the same time using the
commutativity/associativity of addition/multiplication to reduce the
cardinality of reach[].
Code above also available at:
https://github.com/barrycarter/bcapps/blob/master/playground.m#L20
along with literally gigabytes of other useless code, data, and humor.
I think the answer to your question lies in the command Groupings. This allows you to create a binary tree of a list. The binary tree is very useful, as each of the operations you allow (Plus, Subtract, Times, Divide, Power) takes two arguments. E.g.
In> Groupings[3,2]
Out> {List[List[1,2],3],List[1,List[2,3]]}
Thus all we need to do is replace List with any combination of the allowed operations.
However, Groupings seems to be almighty, as it has an option to do exactly this. Imagine you have two functions foo and bar, both taking 2 arguments; then you can make all combinations as:
In> Groupings[3,{foo->2,bar->2}]
Out> {foo[foo[1,2],3],foo[1,foo[2,3]],foo[bar[1,2],3],foo[1,bar[2,3]],
bar[foo[1,2],3],bar[1,foo[2,3]],bar[bar[1,2],3],bar[1,bar[2,3]]}
Now it is possible to count the number of combinations we have:
In> Groupings[Permutations[#],
{Plus->2,Subtract->2,Times->2,Divide->2,Power->2}
] &@ {a,b,c,d,e};
In> Length#%
In> DeleteDuplicates#%%
In> Length#%
Out> 1050000
Out> 219352
This means that for 5 distinct numbers, we have 219352 unique combinations.
Sadly, many of these combinations cannot be evaluated due to overflow, division by zero, or underflow. However, it is not evident which ones to remove. The value a^(b^(c^(d^e))) could be humongous, or just small. Fractional powers could result in perfect roots, and divisions by large numbers can come out exact.
In> Groupings[Permutations[#],
{Plus->2,Subtract->2,Times->2,Divide->2,Power->2}
] &@ {2, 3, 4};
In> Union[Cases[%, _?(IntegerQ[#] && # >= 0 &)]];
In> Split[%, #2 - #1 <= 1 &][[1]]
Out> {1, 2, 3, 4, 5, 6}

How to test if a list contains consecutive integers in Mathematica?

I want to test if a list contains consecutive integers.
consQ[a_] := Module[
{ret = True},
Do[If[i > 1 && a[[i]] != a[[i - 1]] + 1, ret = False; Break[]], {i,
1, Length[a]}]; ret]
Although the function consQ does the job, I wonder if there is a better ( shorter, faster ) method of doing this, preferably using functional programming style.
EDIT:
The function above maps lists with consecutive integers in decreasing sequence to False. I would like to change this to True.
Szabolcs's solution is probably what I'd do, but here's an alternative:
consQ[a : {___Integer}] := Most[a] + 1 === Rest[a]
consQ[_] := False
Note that these approaches differ in how they handle the empty list.
You could use
consQ[a_List ? (VectorQ[#, IntegerQ] &)] := Union@Differences[a] === {1}
consQ[_] = False
You may want to remove the test for integers if you know that every list you pass to it will only have integers.
EDIT: A little extra: if you use a very old version that doesn't have Differences, or wonder how to implement it,
differences[a_List] := Rest[a] - Most[a]
EDIT 2: The requested change:
consQ[a : {___Integer}] := MatchQ[Union@Differences[a], {1} | {-1} | {}]
consQ[_] = False
This works with both increasing and decreasing sequences, and gives True for a list of size 1 or 0 as well.
More generally, you can test whether the numbers in the list are equally spaced with something like equallySpacedQ[a_List] := Length@Union@Differences[a] == 1
I think the following is also fast, and comparing reversed lists does not take longer:
a = Range[10^7];
f[a_] := Range[Sequence @@ ##, Sign[-#[[1]] + #[[2]]]] &@{a[[1]], a[[-1]]} == a;
Timing[f[a]]
b = Reverse@a;
Timing[f[b]]
Edit
A short test for the fastests solutions so far:
a = Range[2 10^7];
Timing@consQSzab@a
Timing@consQBret@a
Timing@consQBeli@a
(*
{6.5,True}
{0.703,True}
{0.203,True}
*)
I like the solutions by the other two, but I'd be concerned about very long lists. Consider the data
d:dat[n_Integer?Positive]:= d = {1}~Join~Range[1, n]
which has its non-sequential point at the very beginning. Setting consQ1 for Brett's and consQ2 for Szabolcs, I get
AbsoluteTiming[#[dat[10000]]] & /@ {consQ1, consQ2}
{ {0.000110, False}, {0.001091, False} }
Or, roughly a ten times difference between the two, which stays relatively consistent with multiple trials. This time can be cut in roughly half by short-circuiting the process using NestWhile:
Clear[consQ3]
consQ3[a : {__Integer}] :=
Module[{l = Length[a], i = 1},
NestWhile[# + 1 &, i,
(#2 <= l) && a[[#1]] + 1 == a[[#2]] &,
2] > l
]
which tests each pair and only continues if they return true. The timings
AbsoluteTiming[consQ3[dat[ 10000 ]]]
{0.000059, False}
with
{0.000076, False}
for consQ. So, Brett's answer is fairly close, but occasionally, it will take twice as long.
Edit: I moved the graphs of the timing data to a Community Wiki answer.
Fold can be used in a fairly concise expression that runs very quickly:
consQFold[a_] := (Fold[If[#2 == #1 + 1, #2, Return[False]] &, a[[1]]-1, a]; True)
Pattern-matching can be used to provide a very clear expression of intent at the cost of substantially slower performance:
consQMatch[{___, i_, j_, ___}] /; j - i != 1 := False
consQMatch[_] = True;
Edit
consQFold, as written, works in Mathematica v8.0.4 but not in earlier versions of v8 or v7. To correct this problem, there are a couple of alternatives. The first is to explicitly name the Return point:
consQFold[a_] :=
(Fold[If[#2==#1+1, #2, Return[False,CompoundExpression]] &, a[[1]]-1, a]; True)
The second, as suggested by @Mr.Wizard, is to replace Return with Throw / Catch:
consQFold[a_] :=
Catch[Fold[If[#2 == #1 + 1, #2, Throw[False]]&, a[[1]]-1, a]; True]
Since the timing seems to be rather important. I've moved the comparisons between the various methods to this, Community Wiki, answer.
The data used are simply lists of consecutive integers, with a single non-consecutive point, and they're generated via
d : dat[n_Integer?Positive] := (d = {1}~Join~Range[1, n])
d : dat[n_Integer?Positive, p_Integer?Positive] /; p <= n :=
Range[1, p]~Join~{p}~Join~Range[p + 1, n]
where the first form of dat[n] is equivalent to dat[n, 1]. The timing code is simple:
Clear[consQTiming]
Options[consQTiming] = {
NonConsecutivePoints -> {10, 25, 50, 100, 250,500, 1000}};
consQTiming[fcns__, OptionPattern[]] :=
 With[{rnd = RandomInteger[{1, #}, 100]},
   With[{fcn = #},
      Timing[fcn[dat[10000, #]] & /@ rnd][[1]]/100
     ] & /@ {fcns}
  ] & /@ OptionValue[NonConsecutivePoints]
It generates 100 random integers between 1 and each of {10, 25, 50, 100, 250, 500, 1000} and dat then uses each of those random numbers as the non-consecutive point in a list 10,000 elements long. Each consQ implementation is then applied to each list produced by dat, and the results are averaged. The plotting function is simply
Clear[PlotConsQTimings]
Options[PlotConsQTimings] = {
NonConsecutivePoints -> {10, 25, 50, 100, 250, 500, 1000}};
PlotConsQTimings[timings : { _?VectorQ ..}, OptionPattern[]] :=
ListLogLogPlot[
 Thread[{OptionValue[NonConsecutivePoints], #}] & /@ Transpose[timings],
Frame -> True, Joined -> True, PlotMarkers -> Automatic
]
I timed the following functions consQSzabolcs1, consQSzabolcs2, consQBrett, consQRCollyer, consQBelisarius, consQWRFold, consQWRFold2, consQWRFold3, consQWRMatch, and MrWizard's version of consQBelisarius.
In ascending order of the left most timing: consQBelisarius, consQWizard, consQRCollyer, consQBrett, consQSzabolcs1, consQWRMatch, consQSzabolcs2, consQWRFold2, consQWRFold3,and consQWRFold.
Edit: Reran all of the functions with timeAvg (the second one) instead of Timing in consQTiming. I did still average over 100 runs, though. For the most part, there wasn't any change, except for the lowest two, where there is some variation from run to run. So, take those two lines with a grain of salt, as their timings are practically identical.
I am now convinced that belisarius is trying to get my goat by writing intentionally convoluted code. :-p
I would write: f = Range[##, Sign[#2 - #]] & @@ #[[{1, -1}]] == # &
Also, I believe that WReach probably intended to write something like:
consQFold[a_] :=
Catch[
 Fold[If[#2 === # + 1, #2, Throw@False] &, a[[1]] - 1, a];
True
]
