In the following snippet, if you replace Do by ParallelDo, it will evaluate a factor of 3 SLOWER, because the Break will now stop the loop in only ONE of the two kernels.
ParallelEvaluate[NN = 10070];
SetSharedVariable[res]
Module[{a, b, c},
Do[
c = NN - a - b;
If[a a + b b == c c, res = a b c; Break[]]
, {a, 1, NN}, {b, a + 1, NN}
];
res
] // AbsoluteTiming
Calling ParallelAbort would solve the issue, but it's forbidden. What else is there?
You need to have a way for each iteration to tell
all other iterations that the answer has been found.
I modelled this with a "quit" flag, initially set
to false, that is set to true when any iteration
decides to finish. Each iteration likewise has
to check the exit condition.
My Mathematica is 15 years rusty, and I haven't
seen the Parallelxxx forms before, but a good guess
at how the loop should change is the following
variation on your code:
ParallelEvaluate[NN = 10070];
SetSharedVariable[res,quit]
Module[{a, b, c},
quit = False;
Do[ c = NN - a - b;
If[quit, Break[]];
If[ a a + b b == c c, quit = True; res = a b c; Break[]],
{a, 1, NN}, {b, a + 1, NN}
];
res
] // AbsoluteTiming
The extra If slows down the loop somewhat, but that's the price of
synchronization.
I suspect that the amount
of work you are doing in each iteration is already pretty small
compared to the cost of executing each iteration in parallel,
so this parallel loop is probably inefficient and you may not get
any real value from ParallelDo.
If you don't, then you can make each Do iteration operate on several values
of a and b (e.g., use {a, 1, NN, 10} and similarly for b, handling the
10-wide subrange as a subloop inside each parallel iteration) to keep the
quit-test overhead small in comparison to the work done in each loop body.
Recode exercise left for the reader.
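Still, since my Mathematica is rusty, here is only a rough sketch of that chunked variant (untested; the chunk width 10 is arbitrary and the bookkeeping may need adjusting):

```mathematica
ParallelEvaluate[NN = 10070];
SetSharedVariable[res, quit]
Module[{c},
 quit = False;
 ParallelDo[
  If[! quit,                    (* one cheap shared-flag test per chunk *)
   Do[
    c = NN - a - b;
    If[a a + b b == c c, quit = True; res = a b c; Break[]],
    {a, a0, Min[a0 + 9, NN]}, {b, a + 1, NN}]],
  {a0, 1, NN, 10}];
 res] // AbsoluteTiming
```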
Your code has another problem: there's a race condition in setting
res. Under certain circumstances, two iterations could both decide to set res.
If you don't care which answer is produced, and the store to res is "atomic",
this is fine. If res were a more complicated data structure, and updating
it took multiple Mathematica statements, you'd surely have a data race
and your loop would produce bad results once in a great while and it
would be very hard to debug. You ideally need some kind of atomic
test to protect the exit condition. I don't know what that is in MMa,
so you'll have to look it up, but I imagine an "atomic[...]" form
that insists its body is executed by only one of the many parallel threads.
(Perhaps MMa has a semaphore that you can use to implement atomic[...].)
If so, your code should then look like this:
ParallelEvaluate[NN = 10070];
SetSharedVariable[res,quit]
Module[{a, b, c},
quit = False;
Do[ c = NN - a - b;
If[quit, Break[]];
If[ a a + b b == c c,
atomic[If[!quit, quit = True; res = a b c]]; Break[]],
{a, 1, NN}, {b, a + 1, NN}
];
res
] // AbsoluteTiming
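As it turns out, Mathematica's parallel tools do provide such a primitive: CriticalSection, which holds a lock on the listed shared variables while its body evaluates on one kernel at a time. A sketch of the guarded exit using it (here lock is just a shared variable serving as the lock; check the details yourself):

```mathematica
SetSharedVariable[res, quit, lock];
(* inside the loop body, in place of the hypothetical atomic[...] *)
If[a a + b b == c c,
 CriticalSection[{lock},
  If[! quit, quit = True; res = a b c]];
 Break[]]
```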
Related
I need to find a square matrix A satisfying the equation
A.L.A = -17/18A -2(A.L.L + L.A.L + (L.L).A) + 3(A.L + L.A) -4L.L.L + 8L.L - 44/9L + 8/9*(ID)
, where L is the diagonal matrix L = {{2/3,0,0,0},{0,5/12,0,0},{0,0,11/12,0},{0,0,0,2/3}}.
I can find the answers in the case that A is of dimension 2 and 3, but there is a problem with dimension 4 and above.
Actually, the matrix A has to satisfy the equation A.A = A too, but with a suitable matrix L the equation above is enough.
This is my code:
A = Table[a[i,j],{i,1,4},{j,1,4}]
B = A.L.A
ID = IdentityMatrix[4]
M = -17/18A -2(A.L.L + L.A.L + (L.L).A) + 3(A.L + L.A) -4L.L.L + 8L.L - 44/9L + 8/9*(ID)
diff = (B - M)//ExpandAll//Flatten (so I get a system of 16 nonlinear equations here)
A1 = A/.Solve[diff == 0][[1]]
After running this code for quite some time, an error comes up saying there is not enough memory to compute.
In this case there are 16 equations and 16 variables. Some of the entries are parameters, but I just do not know which ones until I get the result.
I am not sure if there is any way to solve this problem. I need the answer to be rational (probably integers), which is possible theoretically.
Could this problem be solved by matrix equation or any other method? I see one problem for this is there are too many equations and variables.
This evaluates fairly quickly and with modest memory for a problem this size.
L = {{2/3, 0, 0, 0}, {0, 5/12, 0, 0}, {0, 0, 11/12, 0}, {0, 0, 0, 2/3}};
A = {{a, b, c, d}, {e, f, g, h}, {i, j, k, l}, {m, n, o, p}};
Reduce[{A.L.A == -17/18 A - 2 (A.L.L + L.A.L + (L.L).A) + 3 (A.L + L.A) -
4 L.L.L + 8 L.L - 44/9 L + 8/9*IdentityMatrix[4]},
{a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p}, Backsubstitution->True
]
Then you just have to sort through the 143 potential solutions that it returns.
You might be able to Select from those that satisfy your A.A==A. You can also use ToRules on the result returned from Reduce to put this into a form similar to that returned from Solve, but check this carefully to make certain it is doing what you expect.
Check this very carefully to make certain I haven't made any mistakes.
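As a sketch of that post-processing (assuming the Reduce output above has been stored in a variable red; ToRules turns each branch of the result into Solve-style rules):

```mathematica
sols = {ToRules[red]};                 (* list of Solve-style rule lists *)
candidates = A /. sols;                (* the candidate matrices *)
good = Select[candidates, #.# == # &]  (* keep only those satisfying A.A == A *)
```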
I aim to calculate and preserve the results from the maximization of a function with two arguments and one exogenous parameter, when the maximum can not be derived (in closed form) by maximize. For instance, let
f[x_, y_, a_] = Max[0, Min[a - y, 1 - x - y]]
be the objective function where a is positive. The maximization shall take place over [0,1]^2, therefore I set
m[a_]=Maximize[{f[x, y, a], 0 <= x <= 1 && 0 <= y <= 1 && 0 <= a}, {x,y}]
Obviously m can be evaluated at any point a and it is therefore possible to plot the maximizing x by employing
Plot[x /. m[a][[2]], {a, 0.01, 1}]
As I need to do several plots and further derivations containing the optimal solutions x and y (which of course are functions of a), I would like to preserve/save the results of the optimization for further use. Is there an elegant way to do this, or do I have to write some kind of loop to extract the values myself?
Now that I've seen the full text of your comment on my original comment, I suspect that you do understand the differences between Set and SetDelayed well enough. I think what you may be looking for is memoisation, sometimes implemented a bit like this:
f[x_,y_] := f[x,y] = Max[0,Min[a-y,1-x-y]]
When you evaluate, for example, f[3,4] for the first time it will evaluate the entire expression to the right of the :=. The rhs is the assignment f[3,4] = Max[0,Min[a-y,1-x-y]]. The next time you evaluate f[3,4], Mathematica already has a value for it so doesn't need to recompute it; it just recalls it. In this example the stored value would be Max[0,Min[a-4,-6]], of course.
I remain a little uncertain of what you are trying to do so this answer may not be any use to you at all.
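One general side note on this technique (standard Mathematica behaviour, not specific to your f): every memoised call adds a stored rule to the symbol, which you can inspect or discard:

```mathematica
f[x_, y_] := f[x, y] = Max[0, Min[a - y, 1 - x - y]];
f[3, 4];       (* first call computes the value and stores it *)
DownValues[f]  (* now also contains the stored rule for f[3, 4] *)
Clear[f]       (* discards the cached values along with the definition *)
```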
Simple approach
results = Table[{x, y, a} /. m[a][[2]], {a, 0.01, 1, .01}]
ListPlot[{#[[3]], #[[1]]} & /@ results, Joined -> True]
(The Set = is ok here so long as 'a' is not previously defined.)
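If you'd rather have the optimizers as reusable functions of a than as a raw table, one option (a sketch built on the results table above; xOpt and yOpt are names I've made up) is to interpolate:

```mathematica
results = Table[{x, y, a} /. m[a][[2]], {a, 0.01, 1, .01}];
xOpt = Interpolation[{#[[3]], #[[1]]} & /@ results];  (* optimal x as a function of a *)
yOpt = Interpolation[{#[[3]], #[[2]]} & /@ results];  (* optimal y as a function of a *)
Plot[xOpt[a], {a, 0.01, 1}]
```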
If you want to utilise Plot[]s automatic evaluation take a look at Reap[]/Sow[]
{p, data} = Reap[Plot[x /. Sow[m[a]][[2]], {a, 0.01, 1}]];
Show[p]
(this takes a few minutes as the function output is a mess..).
hmm try this again: assuming you want x,y,a and the minimum value:
{p, data} = Reap[Plot[x /. Sow[{a, m[a]}][[2, 2]], {a, 0.01, .1}]];
Show[p]
results = {#[[1]], x /. #[[2, 2]], y /. #[[2, 2]], #[[2, 1]]} & /@ data[[1]]
BTW your function appears to be independent of x over some ranges, which is why the plot is a mess.
I am trying to get Mathematica to approximate an integral that is a function of various parameters. I don't need it to be extremely precise -- the answer will be a fraction, and 5 digits would be nice, but I'd settle for as few as 2.
The problem is that there is a symbolic integral buried in the main integral, and I can't use NIntegrate on it since it's symbolic.
F[x_, c_] := (1 - (1 - x)^c)^c;
a[n_, c_, x_] := F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
MyIntegral[n_,c_] :=
NIntegrate[Integrate[(D[a[n,c,y],y]*y)/(1-a[n,c,x]),{y,x,1}],{x,0,1}]
Mathematica starts hanging when n is greater than 2 and c is greater than 3 or so (generally as both n and c get a little higher).
Are there any tricks for rewriting this expression so that it can be evaluated more easily? I've played with different WorkingPrecision and AccuracyGoal and PrecisionGoal options on the outer NIntegrate, but none of that helps the inner integral, which is where the problem is. In fact, for the higher values of n and c, I can't even get Mathematica to expand the inner derivative, i.e.
Expand[D[a[4,6,y],y]]
hangs.
I am using Mathematica 8 for Students.
If anyone has any tips for how I can get M. to approximate this, I would appreciate it.
Since you only want a numerical output (or that's what you'll get anyway), you can convert the symbolic integration into a numerical one using just NIntegrate as follows:
Clear[a,myIntegral]
a[n_Integer?Positive, c_Integer?Positive, x_] :=
a[n, c, x] = (1 - (1 - a[n - 1, c, x])^c)^c;
a[0, c_Integer, x_] = x;
myIntegral[n_, c_] :=
NIntegrate[D[a[n, c, y], y]*y/(1 - a[n, c, x]), {x, 0, 1}, {y, x, 1},
WorkingPrecision -> 200, PrecisionGoal -> 5]
This is much faster than performing the integration symbolically. Here's a comparison:
yoda:
myIntegral[2,2]//Timing
Out[1]= {0.088441, 0.647376595...}
myIntegral[5,2]//Timing
Out[2]= {1.10486, 0.587502888...}
rcollyer:
MyIntegral[2,2]//Timing
Out[3]= {1.0029, 0.647376}
MyIntegral[5,2]//Timing
Out[4]= {27.1697, 0.587503006...}
(* Obtained with WorkingPrecision->500, PrecisionGoal->5, MaxRecursion->20 *)
Jand's function has timings similar to rcollyer's. Of course, as you increase n, you will have to increase your WorkingPrecision way higher than this, as you've experienced in your previous question. Since you said you only need about 5 digits of precision, I've explicitly set PrecisionGoal to 5. You can change this as per your needs.
To codify the comments, I'd try the following. First, to eliminate infinite recursion with regard to the variable n, I'd rewrite your functions as
F[x_, c_] := (1 - (1-x)^c)^c;
(* see note below *)
a[n_Integer?Positive, c_, x_] := F[a[n - 1, c, x], c];
a[0, c_, x_] = x;
that way n==0 will actually be a stopping point. The ?Positive form is a PatternTest, and useful for applying additional conditions to the parameters. I suspect the issue is that NIntegrate is re-evaluating the inner Integrate for every value of x, so I'd pull that evaluation out, like
MyIntegral[n_,c_] :=
With[{ int = Integrate[(D[a[n,c,y],y]*y)/(1-a[n,c,x]),{y,x,1}] },
NIntegrate[int,{x,0,1}]
]
where With is one of several scoping constructs specifically for creating local constants.
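The practical point is that With substitutes the computed value of int into its body before evaluation, so the symbolic Integrate runs exactly once instead of at every sample point. A tiny illustration of the difference:

```mathematica
With[{c = 1 + 1}, Hold[c + x]]    (* Hold[2 + x]: c is replaced before evaluation *)
Module[{c = 1 + 1}, Hold[c + x]]  (* Hold[c$nnn + x]: c stays a local symbol *)
```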
Your comments indicate that the inner integral takes a long time, have you tried simplifying the integrand as it is a derivative of a times a function of a? It seems like the result of a chain rule expansion to me.
Note: as per Yoda's suggestion in the comments, you can add a cacheing, or memoization, mechanism to a. Change its definition to
d:a[n_Integer?Positive, c_, x_] := d = F[a[n - 1, c, x], c];
The trick here is that in d:a[ ... ], d is a named pattern that is used again in d = F[...] cacheing the value of a for those particular parameter values.
Many algorithms (like the algorithm for finding the next permutation of a list in lexicographical order) involve finding the index of the last element in a list. However, I haven't been able to find a way to do this in Mathematica that isn't awkward. The most straightforward approach uses LengthWhile, but it means reversing the whole list (which is likely to be inefficient in cases where you know the element you want is near the end) and reversing the sense of the predicate:
findLastLengthWhile[list_, predicate_] :=
(Length@list - LengthWhile[Reverse@list, ! predicate@# &]) /. (0 -> $Failed)
We could do an explicit, imperative loop with Do, but that winds up being a bit clunky, too. It would help if Return would actually return from a function instead of the Do block, but it doesn't, so you might as well use Break:
findLastDo[list_, pred_] :=
Module[{k, result = $Failed},
Do[
If[pred@list[[k]], result = k; Break[]],
{k, Length@list, 1, -1}];
result]
Ultimately, I decided to iterate using tail-recursion, which means early termination is a little easier. Using the weird but useful #0 notation that lets anonymous functions call themselves, this becomes:
findLastRecursive[list_, pred_] :=
With[{
step =
Which[
#1 == 0, $Failed,
pred@list[[#1]], #1,
True, #0[#1 - 1]] &},
step[Length@list]]
All of this seems too hard, though. Does anyone see a better way?
EDIT to add: Of course, my preferred solution has a bug which means it's broken on long lists because of $IterationLimit.
In[107]:= findLastRecursive[Range[10000], # > 10000 &]
$IterationLimit::itlim: Iteration limit of 4096 exceeded.
Out[107]= (* gack omitted *)
You can fix this with Block:
findLastRecursive[list_, pred_] :=
Block[{$IterationLimit = Infinity},
With[{
step =
Which[
#1 == 0, $Failed,
pred@list[[#1]], #1,
True, #0[#1 - 1]] &},
step[Length@list]]]
$IterationLimit is not my favorite Mathematica feature.
Not really an answer, just a couple of variants on findLastDo.
(1) Actually Return can take an undocumented second argument telling what to return from.
In[74]:= findLastDo2[list_, pred_] :=
Module[{k, result = $Failed},
Do[If[pred@list[[k]], Return[k, Module]], {k, Length@list, 1, -1}];
result]
In[75]:= findLastDo2[Range[25], # <= 22 &]
Out[75]= 22
(2) Better is to use Catch[...Throw...]
In[76]:= findLastDo3[list_, pred_] :=
Catch[Module[{k, result = $Failed},
Do[If[pred@list[[k]], Throw[k]], {k, Length@list, 1, -1}];
result]]
In[77]:= findLastDo3[Range[25], # <= 22 &]
Out[77]= 22
Daniel Lichtblau
For the adventurous...
The following definitions define a wrapper expression reversed[...] that masquerades as a list object whose contents appear to be a reversed version of the wrapped list:
reversed[list_][[i_]] ^:= list[[-i]]
Take[reversed[list_], i_] ^:= Take[list, -i]
Length[reversed[list_]] ^:= Length[list]
Head[reversed[list_]] ^:= List
Sample use:
$list = Range[1000000];
Timing[LengthWhile[reversed[$list], # > 499500 &]]
(* {1.248, 500500} *)
Note that this method is slower than actually reversing the list...
Timing[LengthWhile[Reverse[$list], # > 499500 &]]
(* 0.468, 500500 *)
... but of course it uses much less memory.
I would not recommend this technique for general use, as flaws in the masquerade can manifest themselves as subtle bugs. Consider: what other functions need to be implemented to make the simulation perfect? The exhibited wrapper definitions are apparently good enough to fool LengthWhile and TakeWhile for simple cases, but other functions (particularly kernel built-ins) may not be so easily fooled. Overriding Head seems particularly fraught with peril.
Notwithstanding these drawbacks, this impersonation technique can sometimes be useful in controlled circumstances.
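To illustrate the kind of flaw meant here (just one example; First is simply not among the overridden functions):

```mathematica
$list = Range[10];
First[reversed[$list]]  (* gives the wrapped list itself, not the element 10 *)
```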
Personally, I don't see anything wrong with the LengthWhile-based solution. Also, if we want to reuse mma built-in list-traversing functions (as opposed to explicit loops or recursion), I don't see a way to avoid reversing the list. Here is a version that does that, but does not reverse the predicate:
Clear[findLastLengthWhile];
findLastLengthWhile[{}, _] = 0;
findLastLengthWhile[list_, predicate_] /; predicate[Last[list]] := Length[list];
findLastLengthWhile[list_, predicate_] :=
Module[{l = Length[list]},
Scan[If[predicate[#], Return[], l--] &, Reverse[list]]; l];
Whether or not it is simpler I don't know. It is certainly less efficient than the one based on LengthWhile, particularly for packed arrays. Also, I use the convention of returning 0 when no element satisfying a condition is found, rather than $Failed, but this is just a personal preference.
EDIT
Here is a recursive version based on linked lists, which is somewhat more efficient:
ClearAll[linkedList, toLinkedList];
SetAttributes[linkedList, HoldAllComplete];
toLinkedList[data_List] := Fold[linkedList, linkedList[], data];
Clear[findLastRec];
findLastRec[list_, pred_] :=
Block[{$IterationLimit = Infinity},
Module[{ll = toLinkedList[list], findLR},
findLR[linkedList[]] := 0;
findLR[linkedList[_, el_?pred], n_] := n;
findLR[linkedList[ll_, _], n_] := findLR[ll, n - 1];
findLR[ll, Length[list]]]]
Some benchmarks:
In[48]:= findLastRecursive[Range[300000],#<9000&]//Timing
Out[48]= {0.734,8999}
In[49]:= findLastRec[Range[300000],#<9000&]//Timing
Out[49]= {0.547,8999}
EDIT 2
If your list can be made a packed array (of whatever dimensions), then you can exploit compilation to C for loop-based solutions. To avoid the compilation overhead, you can memoize the compiled function, like so:
Clear[findLastLW];
findLastLW[predicate_, signature_] := findLastLW[predicate, Verbatim[signature]] =
Block[{list},
With[{sig = List@Prepend[signature, list]},
Compile @@ Hold[
sig,
Module[{k, result = 0},
Do[
If[predicate@list[[k]], result = k; Break[]],
{k, Length@list, 1, -1}
];
result],
CompilationTarget -> "C"]]]
The Verbatim part is necessary since in typical signatures like {_Integer,1}, _Integer will otherwise be interpreted as a pattern and the memoized definition won't match. Here is an example:
In[60]:=
fn = findLastLW[#<9000&,{_Integer,1}];
fn[Range[300000]]//Timing
Out[61]= {0.016,8999}
EDIT 3
Here is a much more compact and faster version of recursive solution based on linked lists:
Clear[findLastRecAlt];
findLastRecAlt[{}, _] = 0;
findLastRecAlt[list_, pred_] :=
Module[{lls, tag},
Block[{$IterationLimit = Infinity, linkedList},
SetAttributes[linkedList, HoldAllComplete];
lls = Fold[linkedList, linkedList[], list];
ll : linkedList[_, el_?pred] := Throw[Depth[Unevaluated[ll]] - 2, tag];
linkedList[ll_, _] := ll;
Catch[lls, tag]/. linkedList[] :> 0]]
It is as fast as the versions based on Do loops, and twice as fast as the original findLastRecursive (the relevant benchmark to be added soon; I cannot do benchmarks consistent with the previous ones, being on a different machine at the moment). I think this is a good illustration of the fact that tail-recursive solutions in mma can be as efficient as procedural (uncompiled) ones.
Here are some alternatives, two of which don't reverse the list:
findLastLengthWhile2[list_, predicate_] :=
Length[list]-(Position[list//Reverse, _?(!predicate[#] &),1,1]/.{}->{{0}})[[1, 1]]+1
findLastLengthWhile3[list_, predicate_] :=
Module[{lw = 0},
Scan[If[predicate[#], lw++, lw = 0] &, list];
Length[list] - lw
]
findLastLengthWhile4[list_, predicate_] :=
Module[{a}, a = Split[list, predicate];
Length[list] - If[predicate[a[[-1, 1]]], Length[a[[-1]]], 0]
]
Some timings (number 1 is Pillsy's first one) of finding the last run of 1's in an array of 100,000 1's in which a single zero is placed at various positions. Timings are the mean of 10 repeated measurements:
Code used for timings:
Monitor[
timings = Table[
ri = ConstantArray[1, {100000}];
ri[[daZero]] = 0;
t1 = (a1 = findLastLengthWhile[ri, # == 1 &];) // Timing // First;
t2 = (a2 = findLastLengthWhile2[ri, # == 1 &];) // Timing // First;
t3 = (a3 = findLastLengthWhile3[ri, # == 1 &];) // Timing // First;
t4 = (a4 = findLastLengthWhile4[ri, # == 1 &];) // Timing // First;
{t1, t2, t3, t4},
{daZero, {1000, 10000, 20000, 50000, 80000, 90000, 99000}}, {10}
], {daZero}
]
ListLinePlot[
Transpose[{{1000, 10000, 20000, 50000, 80000, 90000, 99000}, #}] & /@
(Mean /@ timings // Transpose),
Mesh -> All, Frame -> True, FrameLabel -> {"Zero position", "Time (s)", "", ""},
BaseStyle -> {FontFamily -> "Arial", FontWeight -> Bold,
FontSize -> 14}, ImageSize -> 500
]
Timing Reverse for Strings and Reals
a = DictionaryLookup[__];
b = RandomReal[1, 10^6];
Timing[Short@Reverse@#] & /@ {a, b}
(*
->
{{0.016, {Zyuganov,Zyrtec,zymurgy,zygotic,zygotes,...}},
{3.40006*10^-15,{0.693684,0.327367,<<999997>>,0.414146}}}
*)
An elegant solution would be:
findLastPatternMatching[{Longest[start___], f_, ___}, f_] := Length[{start}]+1
(* match this pattern if item not in list *)
findLastPatternMatching[_, _] := -1
but as it's based on pattern matching, it's way slower than the other solutions suggested.
I have conditional logic that requires pre-processing that is common to each of the conditions (instantiating objects, database lookups etc). I can think of 3 possible ways to do this, but each has a flaw:
Option 1
if A
prepare processing
do A logic
else if B
prepare processing
do B logic
else if C
prepare processing
do C logic
// else do nothing
end
The flaw with option 1 is that the expensive preparation code is duplicated.
Option 2
prepare processing // not necessary unless A, B, or C
if A
do A logic
else if B
do B logic
else if C
do C logic
// else do nothing
end
The flaw with option 2 is that the expensive code runs even when none of A, B, or C is true.
Option 3
if (A, B, or C)
prepare processing
end
if A
do A logic
else if B
do B logic
else if C
do C logic
end
The flaw with option 3 is that the conditions for A, B, C are being evaluated twice. The evaluation is also costly.
Now that I think about it, there is a variant of option 3 that I call option 4:
Option 4
if (A, B, or C)
prepare processing
if A
set D
else if B
set E
else if C
set F
end
end
if D
do A logic
else if E
do B logic
else if F
do C logic
end
While this does address the costly evaluations of A, B, and C, it makes the whole thing more ugly and I don't like it.
How would you rank the options, and are there any others that I am not seeing?
Can't you do
if (A, B, or C)
prepare processing
if A
do A logic
else if B
do B logic
else if C
do C logic
end
? Maybe I misunderstood.
Edit: zzz, your edits messed me up. If you don't want it to evaluate A,B,C twice then do
x = func returnCase() //returns a,b, or c
if x != None
prepare processing
do Case
Doesn't this solve the redundancy:
if A
prepareprocessingfunction()
do A logic
else if B
prepareprocessingfunction()
do B logic
else if C
prepareprocessingfunction()
do C logic
// else do nothing
end
prepareprocessingfunction() {
prepare processing
}