Getting Indices from Mathematica's Select - wolfram-mathematica

How can I get the indices of a selection rather than the values? For example:
list={3->4, 5->2, 1->1, 5->8, 3->2};
Select[list, #[[1]]==5&]; (* returns {5->2, 5->8} *)
I would like something like
SelectIndices[list, #[[1]]==5&]; (* returns {2, 4} *)
EDIT: I found an answer to the immediate question above (see below), but what about sorting? Say I want to sort a list, but rather than returning the sorted list, I want to return the indices in the order of the sorted list.

OK, well, I figured out a way to do this. Mathematica uses such a different vocabulary that searching the documentation is generally unfruitful for me (I had been searching for things like "element index from Mathematica Select", to no avail).
Anyway, this seems to be the way to do this:
Position[list, 5->_];
I guess it's time to read up on patterns in Mathematica.
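For reference, a minimal sketch of how Position, Extract, and Flatten fit together here (using the same list as above):
list = {3 -> 4, 5 -> 2, 1 -> 1, 5 -> 8, 3 -> 2};
idx = Position[list, 5 -> _]  (* {{2}, {4}} *)
Extract[list, idx]            (* {5 -> 2, 5 -> 8} *)
Flatten[idx]                  (* {2, 4}, the flat index list originally asked for *)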

Regarding the question remaining after your edit: how about Ordering?
In[26]:= Ordering[{c, x, b, z, h}]
Out[26]= {3, 1, 5, 2, 4}
In[28]:= {c, x, b, z, h}[[Ordering[{c, x, b, z, h}]]]
Out[28]= {b, c, h, x, z}
In[27]:= Sort[{c, x, b, z, h}]
Out[27]= {b, c, h, x, z}
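Applied to the original list of rules, a small sketch of getting the indices that sort by the right-hand-side values (assuming that is the intended sort key):
list = {3 -> 4, 5 -> 2, 1 -> 1, 5 -> 8, 3 -> 2};
Ordering[list[[All, 2]]]          (* {3, 2, 5, 1, 4}: indices that sort the rules by value *)
list[[Ordering[list[[All, 2]]]]]  (* {1 -> 1, 5 -> 2, 3 -> 2, 3 -> 4, 5 -> 8} *)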

I think you want Ordering:
Sort[list, #[[1]] == 5 &]
Ordering[list, All, #[[1]] == 5 &]
(*
{5->2,5->8,3->2,1->1,3->4}
{2,4,5,3,1}
*)

Sorry, I read your question too fast.
I think your second question is about how to sort the list according to the values of the rules. The simplest way that comes to mind is to use a comparison function, then simply use your solution to retrieve the indices:
comp[_ -> x_, a_ -> y_] := x < y;
Position[Sort[list, comp], 5 -> _]
Hope this helps!

Without sorting or otherwise altering the list, do this:
SelectIndex[list_, fn_] := Module[{x, i},
x = Reap[For[i = 1, i <= Length[list], i++, If[fn[list[[i]]], Sow[i]]]];
If[x[[2]] === {}, {}, x[[2, 1]]]]
list={ {"foo",1}, {"bar",2}};
SelectIndex[list, StringMatchQ[ #[[1]], "foo*"] &]
You can use that to extract records from a database:
Lookup[list_, query_, column_: 1, resultColumns_: All] := list[[SelectIndex[list, StringMatchQ[query, #[[column]]] &], resultColumns]]
Lookup[list, "foo"]
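A quick check of the helpers above on the sample list (note that newer Mathematica versions ship a built-in Lookup, so a different name for the second helper, e.g. a hypothetical lookupRows, avoids shadowing it):
list = {{"foo", 1}, {"bar", 2}};
SelectIndex[list, StringMatchQ[#[[1]], "foo*"] &]  (* {1} *)
lookupRows[list_, query_, column_: 1, resultColumns_: All] :=
  list[[SelectIndex[list, StringMatchQ[query, #[[column]]] &], resultColumns]]
lookupRows[list, "foo"]                            (* {{"foo", 1}} *)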

Related

Mathematica: FindFit for NIntegrate of ParametricNDSolve

I've seen several answers for quite similar topics with the usage of ?NumericQ explained, and I still cannot quite understand what is wrong with my implementation, or whether my example can be evaluated the way I want at all.
I have solution of differential equation in form of ParametricNDSolve (I believe that exact form of equation is irrelevant):
sol = ParametricNDSolve[{n'[t] == g/(1/(y - f*y) + b*t + g*t)^2 - a*n[t] - c*n[t]^2, n[0] == y*f}, {n}, {t, 0, 10}, {a, b, c, g, f, y}]
After that I am trying to construct a function for FindFit or a similar procedure, NIntegrate-ing over the function n[a,b,c,g,f,y][t] obtained above with some multiplier (I have chosen Log[z] as the multiplier for simplicity):
Func[z_, a_, b_, c_, g_, f_] :=
NIntegrate[
Log[z]*(n[a, b, c, g, f, y][t] /. sol), {t, 0, 10}, {y, 0, Log[z]}]
So I have an NIntegrate over my function n[params][t] derived from ParametricNDSolve, with a multiplier introducing a new variable (z) which is also present in the limits of integration (in the same form as in the multiplier, for simplicity of the example).
I am able to evaluate my function Func at any point z with given values of the parameters (a,b,c,g,f): Func[0, 1, 2, 3, 4, 5] can be evaluated.
But for some reasons I cannot use FindFit like that:
FindFit[data, Func[z, a, b, c, g, f], {a, b, c, g, f}, z]
The error is: NIntegrate::nlim: y = Log[z] is not a valid limit of integration.
I've tried a lot of different combinations of ?NumericQ usage and they all seem to lead nowhere. Any help would be appreciated!
Thanks in advance, and sorry for the poor English in the problem explanation.
Here is a way to define your function:
sol = n /.
ParametricNDSolve[{n'[t] ==
g/(1/(y - f*y) + b*t + g*t)^2 - a*n[t] - c*n[t]^2,
n[0] == y*f}, {n}, {t, 0, 10}, {a, b, c, g, f, y}]
Func[z_?NumericQ, a_?NumericQ, b_?NumericQ, c_?NumericQ, g_?NumericQ,
f_?NumericQ] :=
NIntegrate[Log[z]*sol[a, b, c, g, f, y][t],
{t, 0, 10}, {y, 0, Log[z]}]
test: Func[2, .45, .5, .13, .12, .2] -> 0.106107
I'm not optimistic you will get good results from FindFit with a function with so many parameters and which is so computationally expensive.

Mathematica, nested optimization

I want to use the solution of Maximization, defined as a function, in another function. Here's an example:
f1[y_] := x /. Last[Maximize[{Sin[x y], Abs[x] <= y}, x]] (* or any other function *)
This definition is fine; for example, if I give f1[4], I get the answer -((3 \[Pi])/8).
The problem is that when I want to use it in another function, I get an error. For example:
FindRoot[f1[y] == Pi/4, {y, 1}]
Gives me the following error:
ReplaceAll::reps: {x} is neither a list of replacement rules nor a valid dispatch table, and so cannot be used for replacing. >>
FindRoot::nlnum: The function value {-0.785398+(x/.x)} is not a list of numbers with dimensions {1} at {y} = {1.}. >>
I've been struggling with this for several days now! Any comment, idea, help, ... is deeply appreciated! Thank you very much!
When y is not a number, your Maximize cannot be resolved, in which case its Last element is just x, which is why you get that odd error message. You can resolve this by clearing the bad definition of f1 and making a new one that only evaluates for numeric arguments:
ClearAll[f1]
f1[y_?NumericQ] := x /. Last[Maximize[{Sin[x y], Abs[x] <= y}, x]]
FindRoot[f1[y] == \[Pi]/4, {y, 1}]
(* {y -> 0.785398} *)

function iteration in mathematica

I'm a beginner in Mathematica programming. My code is not running as expected. I wonder if anyone could examine what goes wrong? Here is part of the code.
F[{k_, n_, x_}] =
Which[k == 0, f[a, b, x],
k == 1, g[a, b, n, x],
k == 2, h[c, d, n, x]]
G[x_] = F[{0, 0, x}]
While[Extract[G[x], 1] != 3, G[x_] = F[G[x]]]
The functions f, g, and h are defined with Which, just as F is, and they are all vector-valued, so it makes sense to iterate F. What I want to achieve is: given the initial value {0,0,x}, keep iterating F until the first component of F becomes 3. Is there anything, e.g. syntax, wrong in the above code?
Thanks!
You need to use SetDelayed (:=) for function definitions like F[x_] := x. When you use Set (=) like F[x_] = x, it is essentially the same as F[_] = x, since the right-hand side isn't delayed until evaluation, so there is no way to transfer the matched pattern on the left-hand side into the evaluation of the right-hand side.
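A tiny illustration of the difference in evaluation timing (a sketch, not part of the original question):
ClearAll[fSet, fDelayed];
fSet[x_] = RandomReal[];      (* Set: RandomReal[] evaluates once, at definition time *)
fDelayed[x_] := RandomReal[]; (* SetDelayed: RandomReal[] re-evaluates at every call *)
fSet[1] == fSet[2]            (* True: both calls return the same stored number *)
fDelayed[1] == fDelayed[2]    (* almost certainly False: two fresh random numbers *)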
As jVincent mentioned, I would use := instead of = while defining F.
I would also use the built in NestWhile instead of manually iterating.
NestWhile[F, {0, 0, x}, Function[e, Extract[e, 1] != 3]]
I can't quite comment on how to correct the code as written because I'm not entirely sure how reassigning G in the While works.
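As a self-contained toy version of that NestWhile pattern (step below is a hypothetical stand-in for F, since f, g, and h are not given in the question):
step[{k_, n_, x_}] := {k + 1, n, x};        (* increments the first component *)
NestWhile[step, {0, 0, x}, First[#] != 3 &] (* iterates until the first component is 3: {3, 0, x} *)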

How to take a list and generate all lists of increasing length?

Any easy question for the Mathematica experts here:
Given a list, say
Clear[a, b, c];
data = {a, b, c};
and I want to get back all lists of length 1, 2, 3, ..., Length[data], each starting from the beginning of the list, so that I get the following for the above:
out = {{a}, {a, b}, {a, b, c}}
I looked at the commands in Mathematica to find a ready-made one to use (I looked at all the Map and Nest* functions), but I cannot see how to use any of them for this. I am sure it is there, but I am not seeing it now.
For now I do this silly Do loop to build it:
m=Length[data];
First@Reap[Do[Sow[data[[1;;i]]],{i,1,m}]][[2]]
{{a},{a,b},{a,b,c}}
The question is: does Mathematica have a built-in command to do the above?
update 8 am
I've deleted the tests I did an hour ago and will be reposting them soon. I need to run them a few times and take an average, as that is the better way to do this performance test.
update 9 am
OK, I've re-run the performance tests on all the solutions shown below: 8 methods.
For each method, I ran it 5 times and took the mean.
I did this for n = {1000, 5000, 10000, 15000, 25000, 30000}, where n is the length of the original list to process.
I can't go much over 30,000 or I will run out of RAM; I only have 4 GB.
I made a small function called makeTable[n, methods] which generates the performance table for a specific n. The test code is below (written quickly, so not the cleanest or most functional code, as I had to go :), but anyone can change/clean it if they want).
Conclusion: kguler's method was the fastest, with Thies's method almost the same for large n (30,000), so for all practical purposes maybe Thies's and kguler's methods can be declared the winners for large n? But since kguler's is also the fastest for small n, so far he gets the clear edge.
Again, the test code is below for anyone to check and run, in case I made an error somewhere. As correctly predicted by Leonid, the linked-list method did not fare too well for large n.
I think more tests are needed, as taking the mean of only 5 runs might not be enough, and there may be other considerations I have missed. This is not an exact test, just a rough one to get an idea.
I tried not to use the PC much while running the tests. I used AbsoluteTiming[] to measure CPU time.
Here is a screenshot of the tables generated:
Here is the test code:
methods = {nasser, wizard1, wizard2, wizard3, kguler, leonid1,
leonid2, thies};
AppendTo[$ContextPath, "Internal`"];
ClearAll[linkedList, leonid2];
SetAttributes[linkedList, HoldAllComplete];
nasser[lst_] := Module[{m = Length[lst]},
First@Reap[Do[Sow[lst[[1 ;; i]]], {i, 1, m}]][[2]]
];
wizard1[lst_] := Module[{},
Take[lst, #] & /@ Range@Length@lst
];
wizard2[lst_] := Module[{},
Table[Take[#, i], {i, Length@#}] &@lst
];
wizard3[lst_] := Module[{},
Rest@FoldList[Append, {}, #] &@lst
];
kguler[lst_] := Module[{},
Reverse@NestList[Most, #, Length[#] - 1] &@lst
];
leonid1[lst_] := Module[{b = Bag[{}]},
Map[(StuffBag[b, #]; BagPart[b, All]) &, lst]
];
leonid2[lst_] := Module[{},
Map[List @@ Flatten[#, Infinity, linkedList] &,
FoldList[linkedList, linkedList[First@lst], Rest@lst]]
];
thies[lst_] :=
Module[{},
Drop[Reverse@
FixedPointList[If[Length[#] > 0, Most, Identity][#] &, lst], 2]
];
makeTable[n_, methods_] :=
Module[{nTests = Length[methods], nTries = 5, i, j, tests, lst, tbl},
lst = Table[RandomReal[], {n}];
tests = Table[0, {nTests}, {nTries}];
For[i = 1, i <= nTests, i++,
For[j = 1, j <= nTries, j++,
tests[[i, j]] = First@AbsoluteTiming[methods[[i]][lst]]
]
];
tbl = Table[{ToString[methods[[i]]], Mean[ tests[[i, All]] ]}, {i,
nTests}] ;
Grid[Join[{{"method", "cpu"}}, tbl],
Frame -> All, FrameStyle -> Directive[Thickness[.005], Gray],
Spacings -> {0.5, 1}
]
];
Now to run, do
makeTable[1000, methods]
Warning: do not try anything over 30,000 unless you have a zillion GB of RAM, or else Mathematica might not return. It happened to me, and I had to reboot the PC.
update 12/26/11 3:30PM
I see that Thies has a newer version of this algorithm (I called it thies2 in the methods table), so I re-ran everything again; the updated table is below. I removed the linked-list version since it is known in advance not to be fast for large n, and this time I ran each method 10 times (not 5 as above) and then took the mean. I also started Mathematica fresh with factory settings (restarted it holding the Alt-Shift keys so that all settings were back to the original, just in case).
Conclusion so far:
kguler's method is fastest for smaller n, i.e. n < 20,000.
For larger n, Thies's second version is faster than his version 1, and it now edges ever so slightly ahead of kguler's method for large n. Congratulations to Thies, the current leader in this performance test. But for all practical purposes, I would say both Thies's and kguler's methods are the fastest for large n, and kguler's remains the fastest for smaller n.
Below are the tables, with the updated test code below them. Anyone is free to run the tests themselves, just in case I overlooked something.
The current test code:
$MinPrecision = $MachinePrecision;
$MaxPrecision = $MachinePrecision;
methods = {nasser, wizard1, wizard2, wizard3, kguler, leonid, thies1,
thies2};
AppendTo[$ContextPath, "Internal`"];
nasser[lst_] := Module[{m = Length[lst]},
First@Reap[Do[Sow[lst[[1 ;; i]]], {i, 1, m}]][[2]]
];
wizard1[lst_] := Module[{},
Take[lst, #] & /@ Range@Length@lst
];
wizard2[lst_] := Module[{},
Table[Take[#, i], {i, Length@#}] &@lst
];
wizard3[lst_] := Module[{},
Rest@FoldList[Append, {}, #] &@lst
];
kguler[lst_] := Module[{},
Reverse@NestList[Most, #, Length[#] - 1] &@lst
];
leonid[lst_] := Module[{b = Bag[{}]},
Map[(StuffBag[b, #]; BagPart[b, All]) &, lst]
];
thies1[lst_] :=
Module[{},
Drop[Reverse@
FixedPointList[If[Length[#] > 0, Most, Identity][#] &, lst], 2]
];
thies2[lst_] :=
Module[{},
Drop[Reverse@
FixedPointList[If[# =!= {}, Most, Identity][#] &, lst], 2]
];
makeTable[n_Integer, methods_List] :=
Module[{nTests = Length[methods], nTries = 10, i, j, tests, lst, tbl},
lst = Table[RandomReal[], {n}];
tests = Table[0, {nTests}, {nTries}];
For[i = 1, i <= nTests, i++,
For[j = 1, j <= nTries, j++,
tests[[i, j]] = First@AbsoluteTiming[methods[[i]][lst]]
]
];
tbl = Table[{ToString[methods[[i]]], Mean[ tests[[i, All]] ]}, {i,
nTests}] ;
Grid[Join[{{"method", "cpu"}}, tbl],
Frame -> All, FrameStyle -> Directive[Thickness[.005], Gray],
Spacings -> {0.5, 1}
]
];
To run, type
n=1000
makeTable[n, methods]
Thanks for everyone for their answers, I learned from all of them.
You can use
f = Reverse@NestList[Most, #, Length[#] - 1] &
f@{a,b,c,d,e} gives {{a}, {a, b}, {a, b, c}, {a, b, c, d}, {a, b, c, d, e}}.
An alternative using ReplaceList -- much slower than f, but ... why not?:
g = ReplaceList[#, {x__, ___} -> {x}] &
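For instance, on a short list this gives the same prefixes:
g@{a, b, c}  (* {{a}, {a, b}, {a, b, c}} *)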
I propose this:
runs[lst_] := Take[lst, #] & /@ Range@Length@lst
Or this:
runs2 = Table[Take[#, i], {i, Length@#}] &;
kguler's answer inspired me to write this:
Rest@FoldList[Append, {}, #] &
But this is slower than his method because of Mathematica's slow appends.
Here is another method which is roughly as efficient as the one involving Take, but uses the Internal`Bag functionality:
AppendTo[$ContextPath, "Internal`"];
runsB[lst_] :=
Module[{b = Bag[{}]}, Map[(StuffBag[b, #]; BagPart[b, All]) &, lst]];
I don't claim that it is simpler than the one based on Take, but it seems to be a simple example of Internal`Bag at work - since this is exactly the type of problem for which these can be successfully used (and there might be cases where lists of explicit positions would either not be available or expensive to compute).
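For readers unfamiliar with these undocumented internals, a minimal sketch of the three calls used above (behavior as exercised here, not a documented API):
b = Internal`Bag[{}];      (* create an empty bag *)
Internal`StuffBag[b, 10];  (* append elements one at a time *)
Internal`StuffBag[b, 20];
Internal`BagPart[b, All]   (* {10, 20}: the accumulated contents as an ordinary list *)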
Just to compare, the solution based on linked lists:
ClearAll[linkedList, runsLL];
SetAttributes[linkedList, HoldAllComplete];
runsLL[lst_] :=
Map[List @@ Flatten[#, Infinity, linkedList] &,
FoldList[linkedList, linkedList[First@lst], Rest@lst]]
will be an order of magnitude slower on large lists.
Another idea:
Inits[l_] := Drop[Reverse@FixedPointList[
If[Length[#] > 0, Most, Identity][#] &,
l
], 2];
Update:
This version is a bit faster by omitting computing the length each time:
Inits2[l_] := Drop[Reverse@FixedPointList[
If[# =!= {}, Most, Identity][#] &,
l
], 2];
Probably not the most efficient, but another approach:
dow[lst_] := lst[[1 ;; #]] & /@ Range@Length@lst
For example:
dow[{a, b, c, d, ee}]
gives:
{{a}, {a, b}, {a, b, c}, {a, b, c, d}, {a, b, c, d, ee}}

Mathematica: branch points for real roots of polynomial

I am doing a brute force search for "gradient extremals" on the following example function
fv[{x_, y_}] = ((y - (x/4)^2)^2 + 1/(4 (1 + (x - 1)^2)))/2;
This involves finding the following zeros
gecond = With[{g = D[fv[{x, y}], {{x, y}}], h = D[fv[{x, y}], {{x, y}, 2}]},
g.RotationMatrix[Pi/2].h.g == 0]
Which Reduce happily does for me:
geyvals = y /. Cases[List@ToRules@Reduce[gecond, {x, y}], {y -> _}];
geyvals is the three roots of a cubic polynomial, but the expression is a bit large to put here.
Now to my question: For different values of x, different numbers of these roots are real, and I would like to pick out the values of x where the solutions branch in order to piece together the gradient extremals along the valley floor (of fv). In the present case, since the polynomial is only cubic, I could probably do it by hand -- but I am looking for a simple way of having Mathematica do it for me?
Edit: To clarify: The gradient extremals stuff is just background -- and a simple way to set up a hard problem. I am not so interested in the specific solution to this problem as in a general hand-off way of spotting the branch points for polynomial roots. Have added an answer below with a working approach.
Edit 2: Since it seems that the actual problem is much more fun than root branching: rcollyer suggests using ContourPlot directly on gecond to get the gradient extremals. To make this complete we need to separate valleys from ridges, which is done by looking at the eigenvalue of the Hessian perpendicular to the gradient. Putting a check for "valleyness" in as a RegionFunction, we are left with only the valley line:
valleycond = With[{
g = D[fv[{x, y}], {{x, y}}],
h = D[fv[{x, y}], {{x, y}, 2}]},
g.RotationMatrix[Pi/2].h.RotationMatrix[-Pi/2].g >= 0];
gbuf["gevalley"]=ContourPlot[gecond // Evaluate, {x, -2, 4}, {y, -.5, 1.2},
RegionFunction -> Function[{x, y}, Evaluate@valleycond],
PlotPoints -> 41];
Which gives just the valley floor line. Including some contours and the saddle point:
fvSaddlept = {x, y} /. First@Solve[Thread[D[fv[{x, y}], {{x, y}}] == {0, 0}]]
gbuf["contours"] = ContourPlot[fv[{x, y}],
{x, -2, 4}, {y, -.7, 1.5}, PlotRange -> {0, 1/2},
Contours -> fv@fvSaddlept (Range[6]/3 - .01),
PlotPoints -> 41, AspectRatio -> Automatic, ContourShading -> None];
gbuf["saddle"] = Graphics[{Red, Point[fvSaddlept]}];
Show[gbuf /@ {"contours", "saddle", "gevalley"}]
We end up with a plot like this:
Not sure if this (belatedly) helps, but it seems you are interested in discriminant points, that is, where both the polynomial and its derivative (with respect to y) vanish. You can solve this system for {x,y} and throw away the complex solutions, as below.
fv[{x_, y_}] = ((y - (x/4)^2)^2 + 1/(4 (1 + (x - 1)^2)))/2;
gecond = With[{g = D[fv[{x, y}], {{x, y}}],
h = D[fv[{x, y}], {{x, y}, 2}]}, g.RotationMatrix[Pi/2].h.g]
In[14]:= Cases[{x, y} /.
NSolve[{gecond, D[gecond, y]} == 0, {x, y}], {_Real, _Real}]
Out[14]= {{-0.0158768, -15.2464}, {1.05635, -0.963629}, {1.,
0.0625}, {1., 0.0625}}
If you only want to plot the result then use StreamPlot[] on the gradients:
grad = D[fv[{x, y}], {{x, y}}];
StreamPlot[grad, {x, -5, 5}, {y, -5, 5},
RegionFunction -> Function[{x, y}, fv[{x, y}] < 1],
StreamScale -> 1]
You may have to fiddle around with the plot's precision, StreamStyle, and the RegionFunction to get it perfect. Especially useful would be using the solution for the valley floor to seed StreamPoints programmatically.
Updated: see below.
I'd approach this first by visualizing the imaginary parts of the roots:
This tells you three things immediately: 1) the first root is always real, 2) the other two form a conjugate pair, and 3) there is a small region near zero in which all three are real. Additionally, note that the exclusions only got rid of the singular point at x=0, and we can see why when we zoom in:
We can then use the EvaluationMonitor to generate the list of roots directly:
Map[Module[{f, fcn = #1},
   f[x_] := Im[fcn];
   Reap[Plot[f[x], {x, 0, 1.5},
      Exclusions -> {True, f[x] == 1, f[x] == -1},
      EvaluationMonitor :> Sow[{x, f[x]}]]][[2, 1]] //
    SortBy[#, First] &] &, geyvals]
(Note: the Part specification is a little odd; Reap returns what is sown as the second item of a List, so this results in a nested list. Also, Plot doesn't sample the points in a straightforward order, so SortBy is needed.) There may be a more elegant route to determine where the last two roots become complex, but since their imaginary parts are piecewise continuous, it just seemed easier to brute force it.
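A tiny example of the Reap return shape mentioned in that note:
Reap[Do[Sow[i^2], {i, 3}]]          (* {Null, {{1, 4, 9}}} *)
Reap[Do[Sow[i^2], {i, 3}]][[2, 1]]  (* {1, 4, 9} *)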
Edit: Since you've mentioned that you want an automatic method for generating where some of the roots become complex, I've been exploring what happens when you substitute in y -> p + I q. Now this assumes that x is real, but you've already done that in your solution. Specifically, I do the following
In[1] := poly = g.RotationMatrix[Pi/2].h.g /. {y -> p + I q} // ComplexExpand;
In[2] := {pr, pi} = poly /. Complex[a_, b_] :> a + z b // CoefficientList[#, z] & //
Simplify[#, {x, p, q} \[Element] Reals]&;
where the second step allows me to isolate the real and imaginary parts of the equation and simplify them independently of each other. Doing this same thing with the generic 2D polynomial, f + d x + a x^2 + e y + 2 c x y + b y^2, but making both x and y complex, I noted that Im[poly] = Im[x] D[poly, Im[x]] + Im[y] D[poly, Im[y]], and this may hold for your equation also. By making x real, the imaginary part of poly becomes q times some function of x, p, and q. So, setting q = 0 always gives Im[poly] == 0. But that does not tell us anything new. However, if we
In[3] := qvals = Cases[List@ToRules@Reduce[pi == 0 && q != 0, {x, p, q}],
   {q -> a_} :> a];
we get several formulas for q involving x and p. For some values of x and p, those formulas may be imaginary, and we can use Reduce to determine where Re[qvals] == 0. In other words, we want the "imaginary" part of y to be real and this can be accomplished by allowing q to be zero or purely imaginary. Plotting the region where Re[q]==0 and overlaying the gradient extremal lines via
With[{rngs = Sequence[{x, -2, 2}, {y, -10, 10}]},
 Show@{
   RegionPlot[Evaluate[Thread[Re[qvals] == 0] /. p -> y], rngs],
   ContourPlot[g.RotationMatrix[Pi/2].h.g == 0, rngs,
    ContourStyle -> {Darker@Red, Dashed}]}]
gives
which confirms the regions in the first two plots showing the 3 real roots.
Ended up trying myself since the goal really was to do it 'hands off'. I'll leave the question open for a good while to see if anybody finds a better way.
The code below uses bisection to bracket the points where CountRoots changes value. This works for my case (spotting the singularity at x=0 is pure luck):
In[214]:= findRootBranches[Function[x, Evaluate@geyvals[[1, 1]]], {-5, 5}]
Out[214]= {{{-5., -0.0158768}, 1}, {{-0.0158768, -5.96046*10^-9}, 3}, {{0., 0.}, 2}, {{5.96046*10^-9, 1.05635}, 3}, {{1.05635, 5.}, 1}}
Implementation:
Options[findRootBranches] = {
AccuracyGoal -> $MachinePrecision/2,
"SamplePoints" -> 100};
findRootBranches::usage =
"findRootBranches[f,{x0,x1}]: Find the the points in [x0,x1] \
where the number of real roots of a polynomial changes.
Returns list of {<interval>,<root count>} pairs.
f: Real -> Polynomial as pure function, e.g f=Function[x,#^2-x&]." ;
findRootBranches[f_, {xa_, xb_}, OptionsPattern[]] := Module[
{bisect, y, rootCount, acc = 10^-OptionValue[AccuracyGoal]},
rootCount[x_] := {x, CountRoots[f[x][y], y]};
(* Define a recursive bisector w/ automatic subdivision *)
bisect[{{x1_, n1_}, {x2_, n2_}} /; Abs[x1 - x2] > acc] :=
Module[{x3, n3},
{x3, n3} = rootCount[(x1 + x2)/2];
Which[
n1 == n3, bisect[{{x3, n3}, {x2, n2}}],
n2 == n3, bisect[{{x1, n1}, {x3, n3}}],
True, {bisect[{{x1, n1}, {x3, n3}}],
bisect[{{x3, n3}, {x2, n2}}]}]];
(* Find initial brackets and bisect *)
Module[{xn, samplepoints, brackets},
samplepoints = N@With[{sp = OptionValue["SamplePoints"]},
If[NumberQ[sp], xa + (xb - xa) Range[0, sp]/sp, Union[{xa, xb}, sp]]];
(* Start by counting roots at initial sample points *)
xn = rootCount /@ samplepoints;
(* Then, identify and refine the brackets *)
brackets = Flatten[bisect /@
Cases[Partition[xn, 2, 1], {{_, a_}, {_, b_}} /; a != b]];
(* Reinclude the endpoints and partition into same-rootcount segments: *)
With[{allpts = Join[{First@xn},
Flatten[brackets /. bisect -> List, 2], {Last@xn}]},
{#1, Last[#2]} & @@@ Transpose /@ Partition[allpts, 2]
]]]
