For my assignment I'm creating a sudoku solver in Erlang using concurrency. My initial idea was to use a backtracking algorithm that would spawn new processes whenever it makes a choice.
However, after putting some time and thought into the project, I'm starting to think that the way I wanted to solve this is a bit too complicated. Has anyone done something similar in the past? Would you recommend a different algorithm that would work better with Erlang and concurrency?
The backtracking algorithm is not really suited to concurrency. Of course, it is always possible to spawn several processes in parallel that start with different initial conditions (all possible values of the first one or two cells to solve), but I don't think this is a truly concurrent application.
A more suitable algorithm is constraint propagation. The idea is to create a process per cell, each cell knowing the processes of its 20 "connected cells" (8 in the same column, 8 in the same line, and 4 more in the same square). A cell has a state which contains - at least - all the possible values it can take. If a cell has only one single possible value remaining, after initialization or during the propagation of constraints, it sends a message {remove,Value} to all its connected cells to inform them to remove the value from their list.
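For concreteness, the 20-peer topology can be computed mechanically. Here is a quick sketch in Python (just an illustration of the geometry; the neighbors/2 function in the Erlang code below does the same with div/rem arithmetic):

```python
def peers(p):
    # p is a cell index 0..80; peers share its row, column, or 3x3 square
    row, col = p // 9, p % 9
    square = (row // 3, col // 3)
    return [q for q in range(81)
            if q != p and (q // 9 == row or
                           q % 9 == col or
                           (q // 9 // 3, (q % 9) // 3) == square)]
```

Every cell ends up with exactly 20 peers: 8 in its row, 8 in its column, and 4 more in its square.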
It is a truly concurrent process, but it has (at least) two issues:
- knowing when a solution is found or when the propagation is stuck;
- only the simplest puzzles will be solved by this algorithm in one shot.
There are some other rules that could be used to solve more complex puzzles: for example, looking for numbers that have only one remaining possibility, looking for pairs... But these rules are not really easy to implement in parallel, and I don't know the set of rules necessary to solve any puzzle.
Note: in the general case such a set of rules does not exist, since a puzzle may have multiple solutions, although that is not the case for the puzzles found in newspapers.
My idea is to complete the constraint propagation algorithm with a search algorithm. A new process, the controller, is in charge of:
- initializing the puzzle,
- selecting the most promising trial,
- asking the cell processes to make the trial,
- controlling the end of the propagation process,
- checking whether the result is
- a solution -> print it,
- a dead end -> ask the cells to come back to the previous state, remove the trial value from the list, and select the next most promising trial,
- anything else -> ask the cells to store the current state and continue with the next trial.
So the cells have to extend their state with a stack where they can push and pop their current list of possible values.
The most promising trial can be selected this way: find the cell with the fewest remaining possible values, and take the first one.
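This is the classic minimum-remaining-values heuristic. A sketch in Python (the possible dict standing in for the cell states is hypothetical):

```python
def most_promising(possible):
    # possible maps a cell name to its list of remaining candidate values;
    # pick the unsolved cell with the fewest candidates and try its first one
    open_cells = [name for name, vals in possible.items() if len(vals) > 1]
    if not open_cells:
        return None  # nothing left to try: solved, or a contradiction elsewhere
    name = min(open_cells, key=lambda n: len(possible[n]))
    return name, possible[name][0]
```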
The next problem is to synchronize everything. The first "simple" solution is to use a timeout. But as always, a timeout is very difficult to define, and in the end very inefficient. I would keep a timeout only for debugging purposes, with a rather big value, because there is some risk that it doesn't work at the first attempt :o).
An alternative to the timeout is to use a counter. Each time the controller sends a message that needs synchronization, it increments its counter. Each time a cell has completed the handling of a message that needs synchronization, it returns an {ack_synchro,N} message to the controller, which in turn subtracts N from its counter. With this scheme, during constraint propagation, when a cell is left with only one remaining possible value, it can send an {ack_synchro,-20} to the controller before sending {remove,Value} to its connected cells, so the controller "knows" it has to wait for 20 more messages. With this principle, it is possible to synchronize the activity of the cells for the push, pop, {try,Value} and {remove,Value} messages.
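The bookkeeping is easy to model. Here is a toy, single-threaded Python rendition of the counter (a plain list standing in for the controller's mailbox), mirroring the wait/1 loop in the Erlang code below:

```python
def wait_for_acks(mailbox, pending):
    # the controller blocks until its counter returns to zero; a cell that
    # cascades first announces ('add', 20) so the controller expects 20 more
    # acknowledgements, then each of its 20 peers eventually reports ('done', 1)
    while pending != 0:
        kind, n = mailbox.pop(0)
        pending += n if kind == 'add' else -n
    return 'done'
```

For example, one trial message that triggers a single cascade is acknowledged only after 1 + 20 done messages have arrived.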
I guess this is missing a lot of details, and I am not sure that it will be faster than imperative backtracking, but it should work at a reasonable coding cost, and it is concurrent.
Edit
I coded this proposal; it is tested with only two test cases, one simple puzzle and one complex one, and it works fine. Here is the code.
The main module (it actually runs in the shell process). To solve a puzzle, use the command sudo:start(file,Filename) or sudo:start(table,InitialList):
-module (sudo).

-export ([start/2]).

start(file,File) ->
    {ok,[Table]} = file:consult(File),
    start(table,Table);
start(table,Table) ->
    init_cells(),
    solve(Table),
    print_result(),
    stop().

stop() ->
    lists:foreach(fun(X) -> cell:stop(X) end, cells()).

cells() ->
    [a1,a2,a3,a4,a5,a6,a7,a8,a9,
     b1,b2,b3,b4,b5,b6,b7,b8,b9,
     c1,c2,c3,c4,c5,c6,c7,c8,c9,
     d1,d2,d3,d4,d5,d6,d7,d8,d9,
     e1,e2,e3,e4,e5,e6,e7,e8,e9,
     f1,f2,f3,f4,f5,f6,f7,f8,f9,
     g1,g2,g3,g4,g5,g6,g7,g8,g9,
     h1,h2,h3,h4,h5,h6,h7,h8,h9,
     i1,i2,i3,i4,i5,i6,i7,i8,i9].

init_cells() ->
    lists:foreach(fun(X) -> cell:start_link(X) end, cells()),
    Zip = lists:zip(cells(),lists:seq(0,80)),
    lists:foreach(fun({N,P}) -> cell:init(N,neighbors(P,Zip)) end, Zip),
    wait(81).

neighbors(P,Zip) ->
    Line = fun(X) -> X div 9 end,
    Col = fun(X) -> X rem 9 end,
    Square = fun(X) -> {Line(X) div 3, Col(X) div 3} end,
    Linked = fun(X) -> (X =/= P) andalso
                       ( (Line(X) == Line(P)) orelse
                         (Col(X) == Col(P)) orelse
                         (Square(X) == Square(P))) end,
    [Name || {Name,Pos} <- Zip, Linked(Pos)].

solve(Table) ->
    Zip = lists:zip(cells(),Table),
    test(Zip),
    do_solve(is_solved()).

do_solve({true,_,_,_}) ->
    done;
do_solve({false,Name,Value,_}) ->
    push(),
    test(Name,Value),
    do_solve(is_solved());
do_solve(error) ->
    pop(),
    {false,Name,Value,_} = is_solved(),
    remove(Name,Value),
    do_solve(is_solved()).

print_result() ->
    R = get_cells(),
    F = fun({_,[I]},Acc) ->
            case Acc of
                _ when (Acc rem 27) == 0 -> io:format("~n~n ~p",[I]);
                _ when (Acc rem 9) == 0 -> io:format("~n ~p",[I]);
                _ when (Acc rem 3) == 0 -> io:format(" ~p",[I]);
                _ -> io:format(" ~p",[I])
            end,
            Acc+1
        end,
    lists:foldl(F,0,R),
    io:format("~n").

test(List) ->
    F = fun({_,0},Acc) ->
            Acc;
           ({Name,Value},Acc) ->
            cell:test(Name,Value),
            Acc+1
        end,
    NbMessages = lists:foldl(F,0,List),
    wait(NbMessages).

test(_,0) -> ok;
test(Name,Value) ->
    cell:test(Name,Value),
    wait(1).

remove(Name,Value) ->
    cell:remove(Name,Value),
    wait(1).

push() ->
    lists:foreach(fun(X) -> cell:push(X) end, cells()),
    wait(81).

pop() ->
    lists:foreach(fun(X) -> cell:pop(X) end, cells()),
    wait(81).

wait(0) ->
    done;
wait(NbMessages) ->
    receive
        {done,N} -> wait(NbMessages-N);
        {add,N} -> wait(NbMessages+N)
    after 2000 ->
        error
    end.

get_cells() ->
    F = fun(X) -> cell:get_val(X), receive {possible,M} -> M end, {X,M} end,
    [F(X) || X <- cells()].

is_solved() ->
    State = get_cells(),
    F = fun({_,[]},_) -> error;
           (_,error) -> error;
           ({Name,List},Acc = {_,_CurName,_CurVal,Length}) ->
            NL = length(List),
            case (NL > 1) andalso (NL < Length) of
                true -> {false,Name,hd(List),NL};
                false -> Acc
            end
        end,
    lists:foldl(F,{true,none,0,10},State).
The cell server and its interfaces:
-module (cell).

-export ([start_link/1,init/2,push/1,pop/1,test/2,remove/2,stop/1,get_val/1]).

% Interfaces

start_link(Name) ->
    Pid = spawn_link(fun() -> init() end),
    register(Name,Pid).

init(Name,List) ->
    Name ! {init,self(),List}.

push(Name) ->
    Name ! push.

pop(Name) ->
    Name ! pop.

test(Name,Value) ->
    Name ! {test,Value}.

remove(Name,Value) ->
    Name ! {remove,Value}.

get_val(Name) ->
    Name ! get.

stop(Name) ->
    Name ! stop.

% private

init() ->
    loop(none,[],[],[]).

loop(Report,Possible,Stack,Neighbors) ->
    receive
        {init,R,List} ->
            R ! {done,1},
            loop(R,lists:seq(1,9),[],List);
        push ->
            Report ! {done,1},
            loop(Report,Possible,[Possible|Stack],Neighbors);
        pop ->
            Report ! {done,1},
            loop(Report,hd(Stack),tl(Stack),Neighbors);
        {test,Value} ->
            NewP = test(Report,Possible,Neighbors,Value),
            loop(Report,NewP,Stack,Neighbors);
        {remove,Value} ->
            NewP = remove(Report,Possible,Neighbors,Value),
            loop(Report,NewP,Stack,Neighbors);
        get ->
            Report ! {possible,Possible},
            loop(Report,Possible,Stack,Neighbors);
        stop ->
            ok
    end.

test(Report,Possible,Neighbors,Value) ->
    true = lists:member(Value,Possible),
    Report ! {add,20},
    lists:foreach(fun(X) -> remove(X,Value) end, Neighbors),
    Report ! {done,1},
    [Value].

remove(Report,Possible,Neighbors,Value) ->
    case Possible of
        [Value,B] ->
            remove(Report,B,Neighbors);
        [A,Value] ->
            remove(Report,A,Neighbors);
        _ ->
            Report ! {done,1}
    end,
    lists:delete(Value,Possible).

remove(Report,Value,Neighbors) ->
    Report ! {add,20},
    lists:foreach(fun(X) -> remove(X,Value) end, Neighbors),
    Report ! {done,1}.
A test file:
[
0,0,0,4,0,6,9,0,0,
0,0,0,0,0,0,1,0,0,
0,0,0,3,0,0,0,7,2,
0,0,5,6,4,0,0,0,0,
0,2,3,0,8,0,0,0,1,
0,8,0,0,0,2,4,0,5,
0,7,8,0,0,0,5,0,0,
6,0,1,0,0,7,2,0,0,
0,0,2,0,0,9,0,0,0
].
In action:
1> c(sudo).
{ok,sudo}
2> c(cell).
{ok,cell}
3> timer:tc(sudo,start,[file,"test_hard.txt"]).
1 3 7 4 2 6 9 5 8
2 6 9 7 5 8 1 4 3
8 5 4 3 9 1 6 7 2
7 1 5 6 4 3 8 2 9
4 2 3 9 8 5 7 6 1
9 8 6 1 7 2 4 3 5
3 7 8 2 1 4 5 9 6
6 9 1 5 3 7 2 8 4
5 4 2 8 6 9 3 1 7
{16000,ok}
4>
No comments in the code, but it does exactly what I propose in the first part of the answer.
If you install wx, just run sudoku:go() from the example that ships with OTP: https://github.com/erlang/otp/blob/86d1fb0865193cce4e308baa6472885a81033f10/lib/wx/examples/sudoku/sudoku.erl
or see this project:
https://github.com/apauley/sudoku-in-erlang
I'm trying to write a short piece of code that will perform propagation of errors. So far, I can get Mathematica to generate the formula for the error delta_f in a function f(x1,x2,...,xi,...,xn) with errors dx1,dx2,...,dxi,...dxn:
fError[f_, xi__, dxi__] :=
Sum[(D[f[xi], xi[[i]]]*dxi[[i]])^2, {i, 1, Length[xi]}]^(1/2)
where fError requires that the input function f has all of its variables surrounded by {...}. For example,
d[{mv_, Mv_, Av_}] := 10^(1/5 (mv - Mv + 5 - Av))
FullSimplify[fError[d, {mv, Mv, Av}, {dmv, dMv, dAv}]]
returns
2 Sqrt[10^(-(2/5) (Av - mv + Mv)) (dAv^2 + dmv^2 + dMv^2)] Log[10]
My question is, how can I evaluate this? Ideally I would like to modify fError to something like:
fError[f_, xi__, nxi__, dxi__]
where nxi is the list of actual values of the xi (kept separate, since setting the xi to their numerical values would destroy the differentiation step above). This function should find the general formula for the error delta_f and then evaluate it numerically, if possible. I think the solution should be as simple as a Hold[] or With[] or something like that, but I can't seem to get it.
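For reference, the quantity being computed is delta_f = Sqrt[Sum_i (D[f, xi] dxi)^2]. A purely numerical version of this is straightforward in any language; here is a Python sketch using central differences (the product function in the example is arbitrary, chosen only because its propagated error is easy to check by hand):

```python
import math

def error_prop(f, values, errors, h=1e-6):
    # delta_f = sqrt(sum_i (df/dx_i * dx_i)^2), with each partial derivative
    # approximated by a central difference of step h
    total = 0.0
    for i, (v, dv) in enumerate(zip(values, errors)):
        up, dn = list(values), list(values)
        up[i], dn[i] = v + h, v - h
        deriv = (f(*up) - f(*dn)) / (2 * h)
        total += (deriv * dv) ** 2
    return math.sqrt(total)

# f(x, y) = x*y: delta_f = sqrt((y dx)^2 + (x dy)^2)
print(error_prop(lambda x, y: x * y, [2.0, 3.0], [0.1, 0.2]))  # ~0.5
```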
I'm not following everything you've done, and since this was posted two years ago it's likely you aren't working on it anymore anyway. I'll give you my solution for error propagation in hopes that it will somehow help you or others.
I tried to include the best documentation that I could in the video and files linked below. If you open the .cdf file and weed through it you should be able to see my code...
Files:
https://drive.google.com/file/d/0BzKVw6gFYxk_YUk4a25ZRFpKaU0/view?pli=1
Video Tutorial:
https://www.youtube.com/watch?v=q1aM_oSIN7w
-Brian
Edit:
I posted the links because I couldn't attach files and didn't want to post code with no documentation for people new to Mathematica. Here's the code directly. I would encourage anyone who finds this solution helpful to take a quick look at the documentation, because it demonstrates some tricks to improve productivity.
Manipulate[
varlist = ToExpression[variables];
funct = ToExpression[function];
errorFunction[variables, function]
, {variables, "{M,m}"}, {function, "g*(M-m)/(M+m)"},
DisplayAllSteps -> True, LabelStyle -> {FontSize -> 17},
AutoAction -> False,
Initialization :> (
errorFunction[v_, f_] := (
varlist = ToExpression[v];
funct = ToExpression[f];
varlength = Length[Variables[varlist]];
theoretical =
Sqrt[(Total[
Table[(D[funct, Part[varlist, n]]*
Subscript[U, Part[varlist, n]])^2, {n, 1,
varlength}]])];
Part[theoretical, 1];
varlist;
uncert = Table[Subscript[U, Part[varlist, n]], {n, 1, varlength}];
uncert = DeleteCases[uncert, Alternatives @@ {0}];
theoretical = Simplify[theoretical];
Column[{Row[{Grid[{
{"Variables", varlist},
{"Uncertainties", uncert},
{"Function", function},
{"Uncertainty Function", theoretical}}, Alignment -> Left,
Spacings -> {2, 1}, Frame -> All,
ItemStyle -> {"Text", FontSize -> 20},
Background -> {{LightGray, None}}]}],
Row[{
Grid[{{"Brian Gennow March/24/2015"}}, Alignment -> Left,
Spacings -> {2, 1}, ItemStyle -> "Text",
Background -> {{None}}]
}]}]))]
This question was posted over 5 years ago, but I ran into the same issue recently and thought I'd share my solution (for uncorrelated errors).
I define a function errorProp that takes two arguments, func and vars. The first argument, func, is the symbolic form of the expression whose error you wish to calculate, given the errors of its arguments. The second argument, vars, should be a list of the form
{{x1,x1 value, dx1, dx1 value},{x2,x2 value, dx2, dx2 value}, ... ,
{xn,xn value, dxn, dxn value}}
where the xi's and dxi's are the symbolic representations of the variables and their errors, while "xi value" and "dxi value" are the numerical values of each variable and its uncertainty (see below for an example).
The function errorProp returns the symbolic form of the error, the value of the input function func, and the value of the error of func calculated from the inputs in vars. Here is the code:
ClearAll[errorProp];
errorProp[func_, vars_] := Module[{derivs=Table[0,{Length[vars]}],
funcErrorForm,funcEval,funcErrorEval,rplcVals,rplcErrors},
For[ii = 1, ii <= Length[vars], ii++,
derivs[[ii]] = D[func, vars[[ii, 1]]];
];
funcErrorForm = Sqrt[Sum[(derivs[[ii]]*vars[[ii, 3]])^2,{ii,Length[vars]}]];
SetAttributes[rplcVals, Listable];
rplcVals = Table[Evaluate[vars[[ii, 1]]] :> Evaluate[vars[[ii, 2]]], {ii,
Length[vars]}];
SetAttributes[rplcErrors, Listable];
rplcErrors = Table[Evaluate[vars[[ii, 3]]] :> Evaluate[vars[[ii, 4]]], {ii,
Length[vars]}];
funcEval = func /. rplcVals;
funcErrorEval = funcErrorForm /. rplcVals /. rplcErrors;
Return[{funcErrorForm, funcEval, funcErrorEval}];
];
Here I show an example of errorProp in action with a reasonably complicated function of two variables:
ClearAll[test];
test = Exp[Sqrt[1/y] - x/y];
errorProp[test, {{x, 0.3, dx, 0.005}, {y, 0.9, dy, 0.1}}]
returns
{Sqrt[dy^2 E^(2 Sqrt[1/y] - (2 x)/y) (-(1/2) (1/y)^(3/2) + x/y^2)^2 + (
dx^2 E^(2 Sqrt[1/y] - (2 x)/y))/y^2], 2.05599, 0.0457029}
Calculating using the error propagation formula returns the same result:
{Sqrt[(D[test, x]*dx)^2 + (D[test, y]*dy)^2],
test /. {x :> 0.3, dx :> 0.005, y :> 0.9, dy :> 0.1},
Sqrt[(D[test, x]*dx)^2 + (D[test, y]*dy)^2] /. {x :> 0.3,
dx :> 0.005, y :> 0.9, dy :> 0.1}}
returns
{Sqrt[dy^2 E^(
2 Sqrt[1/y] - (2 x)/y) (-(1/2) (1/y)^(3/2) + x/y^2)^2 + (
dx^2 E^(2 Sqrt[1/y] - (2 x)/y))/y^2], 2.05599, 0.0457029}
Mathematica 12 introduced the Around function that handles error propagation using the differential method.
So although it is not quite in the format required in the question, something like this is possible:
expression = a^2*b;
expression /. {a -> Around[aval, da], b -> Around[bval, db]}
output:
aval^2 bval ± Sqrt[aval^4 db^2+4 bval^2 Abs[aval da]^2]
Instead of aval, bval, da, db you can use numerical values as well.
Trying to create a table for quantiles of the sum of two dependent random variables using built-in copula distributions (Clayton, Frank, Gumbel) with Beta marginals. Tried NProbability and FindRoot with various methods -- not fast enough.
An example of the copula-marginal combinations I need to explore is the following:
nProbClayton[t_?NumericQ, c_?NumericQ] :=
NProbability[ x + y <= t, {x, y} \[Distributed]
CopulaDistribution[{"Clayton", c}, {BetaDistribution[8, 2],
BetaDistribution[8, 2]}]]
For a single evaluation of the numeric probability using
nProbClayton[1.9, 1/10] // Timing // Quiet
I get
{4.914, 0.939718}
on a Vista 64bit Core2 Duo T9600 2.80GHz machine (MMA 8.0.4)
To get a quantile of the sum, using
FindRoot[nProbClayton[q, 1/10] == 1/100, {q, 1, 0, 2}] // Timing // Quiet
with various methods
( `Method -> Automatic`, `Method -> "Brent"`, `Method -> "Secant"` )
takes about a minute to find a single quantile: Timings are
{48.781, {q -> 0.918646}}
{50.045, {q -> 0.918646}}
{65.396, {q -> 0.918646}}
For other copula-marginal combinations timings are marginally better.
Need: any tricks/methods to improve timings.
The CDF of a Clayton-Pareto copula with parameter c can be calculated according to
cdf[c_] := Module[{c1 = CDF[BetaDistribution[8, 2]]},
(c1[#1]^(-1/c) + c1[#2]^(-1/c) - 1)^(-c) &]
Then, cdf[c][t1,t2] is the probability that x<=t1 and y<=t2. This means that you can calculate the probability that x+y<=t according to
prob[t_?NumericQ, c_?NumericQ] :=
NIntegrate[Derivative[1, 0][cdf[c]][x, t - x], {x, 0, t}]
The timings I get on my machine are
prob[1.9, .1] // Timing
(* ==> {0.087518, 0.939825} *)
Note that I get a different value for the probability from the one in the original post. However, running nProbClayton[1.9,0.1] produces a warning about slow convergence which could mean that the result in the original post is off. Also, if I change x+y<=t to x+y>t in the original definition of nProbClayton and calculate 1-nProbClayton[1.9,0.1] I get 0.939825 (without warnings) which is the same result as above.
For the quantile of the sum I get
FindRoot[prob[q, .1] == .01, {q, 1, 0, 2}] // Timing
(* ==> {1.19123, {q -> 0.912486}} *)
Again, I get a different result from the one in the original post but similar to before, changing x+y<=t to x+y>t and calculating FindRoot[nProbClayton[q, 1/10] == 1-1/100, {q, 1, 0, 2}] returns the same value for q as above.
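As an additional cross-check that does not rely on Mathematica at all, one can Monte-Carlo the same probability. The sketch below (Python) assumes, matching the cdf[c] definition above, that {"Clayton", c} corresponds to C(u,v) = (u^(-1/c) + v^(-1/c) - 1)^(-c), i.e. a standard Clayton copula with parameter theta = 1/c; it uses the Marshall-Olkin sampling construction and the closed-form Beta(8,2) CDF, 9 x^8 - 8 x^9:

```python
import random

def beta82_ppf(u):
    # invert the Beta(8,2) CDF, 9 x^8 - 8 x^9, by bisection on [0, 1]
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 9 * mid**8 - 8 * mid**9 < u:
            lo = mid
        else:
            hi = mid
    return lo

def clayton_sum_prob(t, theta, n=20000, seed=1):
    # estimate P(X + Y <= t) for Beta(8,2) marginals under a Clayton copula
    # with parameter theta, sampled via Marshall-Olkin: draw V ~ Gamma(1/theta),
    # then U = (1 + E/V)^(-1/theta) with E ~ Exp(1) gives Clayton uniforms
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        v = rng.gammavariate(1.0 / theta, 1.0)
        u1 = (1.0 + rng.expovariate(1.0) / v) ** (-1.0 / theta)
        u2 = (1.0 + rng.expovariate(1.0) / v) ** (-1.0 / theta)
        if beta82_ppf(u1) + beta82_ppf(u2) <= t:
            hits += 1
    return hits / n
```

With theta = 1/c = 10, clayton_sum_prob(1.9, 10) lands near 0.94, consistent (up to Monte Carlo noise) with the corrected value above.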
Many algorithms (like the algorithm for finding the next permutation of a list in lexicographical order) involve finding the index of the last element in a list that satisfies some predicate. However, I haven't been able to find a way to do this in Mathematica that isn't awkward. The most straightforward approach uses LengthWhile, but it means reversing the whole list, which is likely to be inefficient in cases where you know the element you want is near the end, and it also means reversing the sense of the predicate:
findLastLengthWhile[list_, predicate_] :=
(Length@list - LengthWhile[Reverse@list, ! predicate@# &]) /. (0 -> $Failed)
We could do an explicit, imperative loop with Do, but that winds up being a bit clunky, too. It would help if Return would actually return from a function instead of the Do block, but it doesn't, so you might as well use Break:
findLastDo[list_, pred_] :=
Module[{k, result = $Failed},
Do[
If[pred@list[[k]], result = k; Break[]],
{k, Length@list, 1, -1}];
result]
Ultimately, I decided to iterate using tail-recursion, which means early termination is a little easier. Using the weird but useful #0 notation that lets anonymous functions call themselves, this becomes:
findLastRecursive[list_, pred_] :=
With[{
step =
Which[
#1 == 0, $Failed,
pred@list[[#1]], #1,
True, #0[#1 - 1]] &},
step[Length@list]]
All of this seems too hard, though. Does anyone see a better way?
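For comparison, the operational specification is just a reverse linear scan with early exit, which is trivial to state in an imperative language, e.g. Python:

```python
def find_last(lst, pred):
    # index of the last element satisfying pred; scanning from the end means
    # the loop stops at the first (i.e. rightmost) match; None if no match
    for i in range(len(lst) - 1, -1, -1):
        if pred(lst[i]):
            return i
    return None
```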
EDIT to add: Of course, my preferred solution has a bug which means it's broken on long lists because of $IterationLimit.
In[107]:= findLastRecursive[Range[10000], # > 10000 &]
$IterationLimit::itlim: Iteration limit of 4096 exceeded.
Out[107]= (* gack omitted *)
You can fix this with Block:
findLastRecursive[list_, pred_] :=
Block[{$IterationLimit = Infinity},
With[{
step =
Which[
#1 == 0, $Failed,
pred@list[[#1]], #1,
True, #0[#1 - 1]] &},
step[Length@list]]]
$IterationLimit is not my favorite Mathematica feature.
Not really an answer, just a couple of variants on findLastDo.
(1) Actually Return can take an undocumented second argument telling it what to return from.
In[74]:= findLastDo2[list_, pred_] :=
Module[{k, result = $Failed},
Do[If[pred@list[[k]], Return[k, Module]], {k, Length@list, 1, -1}];
result]
In[75]:= findLastDo2[Range[25], # <= 22 &]
Out[75]= 22
(2) Better is to use Catch[...Throw...]
In[76]:= findLastDo3[list_, pred_] :=
Catch[Module[{k, result = $Failed},
Do[If[pred@list[[k]], Throw[k]], {k, Length@list, 1, -1}];
result]]
In[77]:= findLastDo3[Range[25], # <= 22 &]
Out[77]= 22
Daniel Lichtblau
For the adventurous...
The following definitions define a wrapper expression reversed[...] that masquerades as a list object whose contents appear to be a reversed version of the wrapped list:
reversed[list_][[i_]] ^:= list[[-i]]
Take[reversed[list_], i_] ^:= Take[list, -i]
Length[reversed[list_]] ^:= Length[list]
Head[reversed[list_]] ^:= List
Sample use:
$list = Range[1000000];
Timing[LengthWhile[reversed[$list], # > 499500 &]]
(* {1.248, 500500} *)
Note that this method is slower than actually reversing the list...
Timing[LengthWhile[Reverse[$list], # > 499500 &]]
(* {0.468, 500500} *)
... but of course it uses much less memory.
I would not recommend this technique for general use as flaws in the masquerade can manifest themselves as subtle bugs. Consider: what other functions need to be implemented to make the simulation perfect? The exhibited wrapper definitions are apparently good enough to fool LengthWhile and TakeWhile in simple cases, but other functions (particularly kernel built-ins) may not be so easily fooled. Overriding Head seems particularly fraught with peril.
Notwithstanding these drawbacks, this impersonation technique can sometimes be useful in controlled circumstances.
Personally, I don't see anything wrong with the LengthWhile-based solution. Also, if we want to reuse mma's built-in list-traversing functions (as opposed to explicit loops or recursion), I don't see a way to avoid reversing the list. Here is a version that does that, but does not reverse the predicate:
Clear[findLastLengthWhile];
findLastLengthWhile[{}, _] = 0;
findLastLengthWhile[list_, predicate_] /; predicate[Last[list]] := Length[list];
findLastLengthWhile[list_, predicate_] :=
Module[{l = Length[list]},
Scan[If[predicate[#], Return[], l--] &, Reverse[list]]; l];
Whether or not it is simpler I don't know. It is certainly less efficient than the one based on LengthWhile, particularly for packed arrays. Also, I use the convention of returning 0 when no element satisfying a condition is found, rather than $Failed, but this is just a personal preference.
EDIT
Here is a recursive version based on linked lists, which is somewhat more efficient:
ClearAll[linkedList, toLinkedList];
SetAttributes[linkedList, HoldAllComplete];
toLinkedList[data_List] := Fold[linkedList, linkedList[], data];
Clear[findLastRec];
findLastRec[list_, pred_] :=
Block[{$IterationLimit = Infinity},
Module[{ll = toLinkedList[list], findLR},
findLR[linkedList[], _] := 0;
findLR[linkedList[_, el_?pred], n_] := n;
findLR[linkedList[ll_, _], n_] := findLR[ll, n - 1];
findLR[ll, Length[list]]]]
Some benchmarks:
In[48]:= findLastRecursive[Range[300000],#<9000&]//Timing
Out[48]= {0.734,8999}
In[49]:= findLastRec[Range[300000],#<9000&]//Timing
Out[49]= {0.547,8999}
EDIT 2
If your list can be made a packed array (of whatever dimensions), then you can exploit compilation to C for loop-based solutions. To avoid the compilation overhead, you can memoize the compiled function, like so:
Clear[findLastLW];
findLastLW[predicate_, signature_] := findLastLW[predicate, Verbatim[signature]] =
Block[{list},
With[{sig = List@Prepend[signature, list]},
Compile @@ Hold[
sig,
Module[{k, result = 0},
Do[
If[predicate@list[[k]], result = k; Break[]],
{k, Length@list, 1, -1}
];
result],
CompilationTarget -> "C"]]]
The Verbatim part is necessary since in typical signatures like {_Integer,1}, _Integer will otherwise be interpreted as a pattern and the memoized definition won't match. Here is an example:
In[60]:=
fn = findLastLW[#<9000&,{_Integer,1}];
fn[Range[300000]]//Timing
Out[61]= {0.016,8999}
EDIT 3
Here is a much more compact and faster version of recursive solution based on linked lists:
Clear[findLastRecAlt];
findLastRecAlt[{}, _] = 0;
findLastRecAlt[list_, pred_] :=
Module[{lls, tag},
Block[{$IterationLimit = Infinity, linkedList},
SetAttributes[linkedList, HoldAllComplete];
lls = Fold[linkedList, linkedList[], list];
ll : linkedList[_, el_?pred] := Throw[Depth[Unevaluated[ll]] - 2, tag];
linkedList[ll_, _] := ll;
Catch[lls, tag]/. linkedList[] :> 0]]
It is as fast as the versions based on Do loops, and twice as fast as the original findLastRecursive (the relevant benchmark is to be added soon; I cannot produce benchmarks consistent with the previous ones right now, being on a different machine at the moment). I think this is a good illustration of the fact that tail-recursive solutions in mma can be as efficient as procedural (uncompiled) ones.
Here are some alternatives, two of which don't reverse the list:
findLastLengthWhile2[list_, predicate_] :=
Length[list]-(Position[list//Reverse, _?(!predicate[#] &),1,1]/.{}->{{0}})[[1, 1]]+1
findLastLengthWhile3[list_, predicate_] :=
Module[{lw = 0},
Scan[If[predicate[#], lw++, lw = 0] &, list];
Length[list] - lw
]
findLastLengthWhile4[list_, predicate_] :=
Module[{a}, a = Split[list, predicate];
Length[list] - If[predicate[a[[-1, 1]]], Length[a[[-1]]], 0]
]
Some timings (number 1 is Pillsy's first one) of finding the last run of 1's in an array of 100,000 1's in which a single zero is placed at various positions. Timings are the mean of 10 repeated measurements:
Code used for timings:
Monitor[
timings = Table[
ri = ConstantArray[1, {100000}];
ri[[daZero]] = 0;
t1 = (a1 = findLastLengthWhile[ri, # == 1 &];) // Timing // First;
t2 = (a2 = findLastLengthWhile2[ri, # == 1 &];) // Timing // First;
t3 = (a3 = findLastLengthWhile3[ri, # == 1 &];) // Timing // First;
t4 = (a4 = findLastLengthWhile4[ri, # == 1 &];) // Timing // First;
{t1, t2, t3, t4},
{daZero, {1000, 10000, 20000, 50000, 80000, 90000, 99000}}, {10}
], {daZero}
]
ListLinePlot[
Transpose[{{1000, 10000, 20000, 50000, 80000, 90000, 99000}, #}] & /@
(Mean /@ timings // Transpose),
Mesh -> All, Frame -> True, FrameLabel -> {"Zero position", "Time (s)", "", ""},
BaseStyle -> {FontFamily -> "Arial", FontWeight -> Bold,
FontSize -> 14}, ImageSize -> 500
]
Timing Reverse for Strings and Reals
a = DictionaryLookup[__];
b = RandomReal[1, 10^6];
Timing[Short@Reverse@#] & /@ {a, b}
(*
->
{{0.016, {Zyuganov,Zyrtec,zymurgy,zygotic,zygotes,...}},
{3.40006*10^-15,{0.693684,0.327367,<<999997>>,0.414146}}}
*)
An elegant solution would be:
findLastPatternMatching[{Longest[start___], f_, ___}, f_] := Length[{start}]+1
(* match this pattern if item not in list *)
findLastPatternMatching[_, _] := -1
but as it's based on pattern matching, it's way slower than the other solutions suggested.
I'm trying to figure out how to use Mathematica to solve systems of equations where some of the variables and coefficients are vectors. A simple example would be something like
A + t V == t P
where I know A, V, and the magnitude of P, and I have to solve for t and the direction of P. (Basically, given two rays A and B, where I know everything about A but only the origin and magnitude of B, figure out what the direction of B must be such that it intersects A.)
Now, I know how to solve this sort of thing by hand, but that's slow and error-prone, so I was hoping I could use Mathematica to speed things along and error-check me. However, I can't see how to get Mathematica to symbolically solve equations involving vectors like this.
I've looked in the VectorAnalysis package, without finding anything there that seems relevant; meanwhile the Linear Algebra package only seems to have a solver for linear systems (which this isn't, since I don't know t or P, just |P|).
I tried doing the simpleminded thing: expanding the vectors into their components (pretend they're 3D) and solving them as if I were trying to equate two parametric functions,
Solve[
{ Function[t, {Bx + Vx*t, By + Vy*t, Bz + Vz*t}][t] ==
Function[t, {Px*t, Py*t, Pz*t}][t],
Px^2 + Py^2 + Pz^2 == Q^2 } ,
{ t, Px, Py, Pz }
]
but the "solution" that spits out is a huge mess of coefficients and congestion. It also forces me to expand out each of the dimensions I feed it.
What I want is a nice symbolic solution in terms of dot products, cross products, and norms, something like
t == (A·V ± Sqrt[A^2 (P^2 - V^2) + (A·V)^2]) / (P^2 - V^2)
But I can't see how to tell Solve that some of the coefficients are vectors instead of scalars.
Is this possible? Can Mathematica give me symbolic solutions on vectors? Or should I just stick with No.2 Pencil technology?
(Just to be clear, I'm not interested in the solution to the particular equation at top -- I'm asking if I can use Mathematica to solve computational geometry problems like that generally without my having to express everything as an explicit matrix of {Ax, Ay, Az}, etc.)
With Mathematica 7.0.1.0
Clear[A, V, P];
A = {1, 2, 3};
V = {4, 5, 6};
P = {P1, P2, P3};
Solve[A + V t == P, P]
outputs:
{{P1 -> 1 + 4 t, P2 -> 2 + 5 t, P3 -> 3 (1 + 2 t)}}
Typing out P = {P1, P2, P3} can be annoying if the array or matrix is large.
Clear[A, V, PP, P];
A = {1, 2, 3};
V = {4, 5, 6};
PP = Array[P, 3];
Solve[A + V t == PP, PP]
outputs:
{{P[1] -> 1 + 4 t, P[2] -> 2 + 5 t, P[3] -> 3 (1 + 2 t)}}
Matrix vector inner product:
Clear[A, xx, bb];
A = {{1, 5}, {6, 7}};
xx = Array[x, 2];
bb = Array[b, 2];
Solve[A.xx == bb, xx]
outputs:
{{x[1] -> 1/23 (-7 b[1] + 5 b[2]), x[2] -> 1/23 (6 b[1] - b[2])}}
Matrix multiplication:
Clear[A, BB, d];
A = {{1, 5}, {6, 7}};
BB = Array[B, {2, 2}];
d = {{6, 7}, {8, 9}};
Solve[A.BB == d]
outputs:
{{B[1, 1] -> -(2/23), B[2, 1] -> 28/23, B[1, 2] -> -(4/23), B[2, 2] -> 33/23}}
The dot product has a built-in infix notation: just use a period for the dot.
I do not think the cross product does, however. This is how you use the Notation package to make one: "X" will become our infix form of Cross. I suggest copying the example from the Notation, Symbolize and InfixNotation tutorial. Also use the Notation Palette, which helps abstract away some of the Box syntax.
Clear[X]
Needs["Notation`"]
Notation[x_ X y_\[DoubleLongLeftRightArrow]Cross[x_, y_]]
Notation[NotationTemplateTag[
RowBox[{x_, , X, , y_, }]] \[DoubleLongLeftRightArrow]
NotationTemplateTag[RowBox[{ ,
RowBox[{Cross, [,
RowBox[{x_, ,, y_}], ]}]}]]]
{a, b, c} X {x, y, z}
outputs:
{-c y + b z, c x - a z, -b x + a y}
The above looks horrible but when using the Notation Palette it looks like:
Clear[X]
Needs["Notation`"]
Notation[x_ X y_\[DoubleLongLeftRightArrow]Cross[x_, y_]]
{a, b, c} X {x, y, z}
I have run into some quirks using the Notation package in past versions of Mathematica, so be careful.
I don't have a general solution for you by any means (MathForum may be a better way to go), but there are some tips I can offer. The first is to do the expansion of your vectors into components in a more systematic way. For instance, I would solve the equation you wrote as follows.
rawSol = With[{coords = {x, y, z}},
Solve[
Flatten[
{A[#] + V[#] t == P[#] t & /@ coords,
Total[P[#]^2 & /@ coords] == P^2}],
Flatten[{t, P /@ coords}]]];
Then you can work with the rawSol variable more easily. Next, because you are referring to the vector components in a uniform way (always matching the Mathematica pattern v_[x|y|z]), you can define rules that will aid in simplifying them. I played around a bit before coming up with the following rules:
vectorRules =
{forms___ + vec_[x]^2 + vec_[y]^2 + vec_[z]^2 :> forms + vec^2,
forms___ + c_. v1_[x]*v2_[x] + c_. v1_[y]*v2_[y] + c_. v1_[z]*v2_[z] :>
forms + c v1\[CenterDot]v2};
These rules will simplify the relationships for vector norms and dot products (cross-products are left as a likely painful exercise for the reader). EDIT: rcollyer pointed out that you can make c optional in the rule for dot products, so you only need two rules for norms and dot products.
With these rules, I was immediately able to simplify the solution for t into a form very close to yours:
In[3] := t /. rawSol //. vectorRules // Simplify // InputForm
Out[3] = {(A \[CenterDot] V - Sqrt[A^2*(P^2 - V^2) +
(A \[CenterDot] V)^2])/(P^2 - V^2),
(A \[CenterDot] V + Sqrt[A^2*(P^2 - V^2) +
(A \[CenterDot] V)^2])/(P^2 - V^2)}
Like I said, it's not a complete way of solving these kinds of problems by any means, but if you're careful about casting the problem into terms that are easy to work with from a pattern-matching and rule-replacement standpoint, you can go pretty far.
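A symbolic result of this kind is also easy to spot-check numerically: plug in concrete vectors and verify that A + t V has the required magnitude. A small Python sketch (the example vectors and the target magnitude p = |P| are chosen arbitrarily):

```python
import math

def solve_t(A, V, p):
    # the two roots of |A + t V| == t p, written with dot products only:
    # t = (A.V ± Sqrt[A.A (p^2 - V.V) + (A.V)^2]) / (p^2 - V.V)
    dot = lambda u, w: sum(a * b for a, b in zip(u, w))
    av, a2, v2 = dot(A, V), dot(A, A), dot(V, V)
    disc = math.sqrt(a2 * (p * p - v2) + av * av)
    return [(av - disc) / (p * p - v2), (av + disc) / (p * p - v2)]

A, V, p = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), 12.0
for t in solve_t(A, V, p):
    x = [a + t * v for a, v in zip(A, V)]   # A + t V, which should equal t P
    assert abs(math.sqrt(sum(c * c for c in x)) - abs(t) * p) < 1e-9
```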
I've taken a somewhat different approach to this issue. I've made some definitions that return this output:
Patterns that are known to be vector quantities may be specified using vec[_], patterns that have an OverVector[] or OverHat[] wrapper (symbols with a vector or hat over them) are assumed to be vectors by default.
The definitions are experimental and should be treated as such, but they seem to work well. I expect to add to this over time.
Here are the definitions. They need to be pasted into a Mathematica notebook cell and converted to StandardForm to be seen properly.
Unprotect[vExpand,vExpand$,Cross,Plus,Times,CenterDot];
(* vec[pat] determines if pat is a vector quantity.
vec[pat] can be used to define patterns that should be treated as vectors.
Default: Patterns are assumed to be scalar unless otherwise defined *)
vec[_]:=False;
(* Symbols with a vector hat, or vector operations on vectors are assumed to be vectors *)
vec[OverVector[_]]:=True;
vec[OverHat[_]]:=True;
vec[u_?vec+v_?vec]:=True;
vec[u_?vec-v_?vec]:=True;
vec[u_?vec\[Cross]v_?vec]:=True;
vec[u_?VectorQ]:=True;
(* Placeholder for matrix types *)
mat[a_]:=False;
(* Anything not defined as a vector or matrix is a scalar *)
scal[x_]:=!(vec[x]\[Or]mat[x]);
scal[x_?scal+y_?scal]:=True;scal[x_?scal y_?scal]:=True;
(* Scalars times vectors are vectors *)
vec[a_?scal u_?vec]:=True;
mat[a_?scal m_?mat]:=True;
vExpand$[u_?vec\[Cross](v_?vec+w_?vec)]:=vExpand$[u\[Cross]v]+vExpand$[u\[Cross]w];
vExpand$[(u_?vec+v_?vec)\[Cross]w_?vec]:=vExpand$[u\[Cross]w]+vExpand$[v\[Cross]w];
vExpand$[u_?vec\[CenterDot](v_?vec+w_?vec)]:=vExpand$[u\[CenterDot]v]+vExpand$[u\[CenterDot]w];
vExpand$[(u_?vec+v_?vec)\[CenterDot]w_?vec]:=vExpand$[u\[CenterDot]w]+vExpand$[v\[CenterDot]w];
vExpand$[s_?scal (u_?vec\[Cross]v_?vec)]:=Expand[s] vExpand$[u\[Cross]v];
vExpand$[s_?scal (u_?vec\[CenterDot]v_?vec)]:=Expand[s] vExpand$[u\[CenterDot]v];
vExpand$[Plus[x__]]:=vExpand$/@Plus[x];
vExpand$[s_?scal,Plus[x__]]:=Expand[s](vExpand$/@Plus[x]);
vExpand$[Times[x__]]:=vExpand$/@Times[x];
vExpand[e_]:=e//.e:>Expand[vExpand$[e]]
(* Some simplification rules *)
(u_?vec\[Cross]u_?vec):=\!\(\*OverscriptBox["0", "\[RightVector]"]\);
(u_?vec+\!\(\*OverscriptBox["0", "\[RightVector]"]\)):=u;
0v_?vec:=\!\(\*OverscriptBox["0", "\[RightVector]"]\);
\!\(\*OverscriptBox["0", "\[RightVector]"]\)\[CenterDot]v_?vec:=0;
v_?vec\[CenterDot]\!\(\*OverscriptBox["0", "\[RightVector]"]\):=0;
(a_?scal u_?vec)\[Cross]v_?vec :=a u\[Cross]v;u_?vec\[Cross](a_?scal v_?vec ):=a u\[Cross]v;
(a_?scal u_?vec)\[CenterDot]v_?vec :=a u\[CenterDot]v;
u_?vec\[CenterDot](a_?scal v_?vec) :=a u\[CenterDot]v;
(* Stealing behavior from Dot *)
Attributes[CenterDot]=Attributes[Dot];
Protect[vExpand,vExpand$,Cross,Plus,Times,CenterDot];