Numerical Solutions in Maxima - wolfram-mathematica

I have an equation in Maxima and I want to find a numerical solution for the variable in this equation. Does anyone know which function in Maxima behaves most like FindRoot in Mathematica?
There seem to be a couple of ways to do it, but I want to check results against a Mathematica code, hence my interest in something similar to FindRoot.
Ben

Mathematica's FindRoot searches for a numerical solution. The closest thing in Maxima (AFAIK) is find_root, which finds a numerical solution in a given interval.
Example:
(%i4) find_root(x^2=7,x,0,100);
(%o4) 2.645751311064591
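If a third, independent check is useful, SciPy's brentq does the same kind of interval-bracketed search as find_root. A minimal sketch in Python (SciPy is my assumption here, not part of the original question):

# Hedged cross-check: bracket the root of x^2 - 7 on [0, 100],
# mirroring the find_root call above.
from scipy.optimize import brentq

root = brentq(lambda x: x**2 - 7, 0, 100)
print(root)   # approximately 2.6457513110645907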

Related

Check whether a system of linear equations has a solution

What is the fastest way to check whether a system of linear equations has a solution?
All numbers are rational and the (large) coefficient matrix can be given as a SparseArray.
I know that LinearSolve can solve this problem, but if I only need to decide whether a solution exists, not what it is, is there a more efficient method?
Computing the rank seems to be slower when there is no solution.
By the way, when I use LinearSolve, the SparseArray form doesn't make it any faster, even though each row contains only very few non-zero elements.
One idea is verifying
Det[PseudoInverse[m]] == 0.
where m is the square matrix of the coefficients.

Upgrading a binary search algorithm to something more sophisticated

I solved an analytically unsolvable problem with numerical methods. I am searching for X, based on a desired Y value: f(x) = y can be evaluated, but x = f^-1(y) cannot.
Currently the algorithm does a binary search. It starts at X = 50%, calculates Y, and returns Y_err = Y - Y_demand. It keeps stepping in increments of 5% in the direction that shrinks Y_err until Y_err changes sign, then it reduces the step and steps in the opposite direction. This works, but it's embarrassingly slow and inefficient.
Below, an example chart of x=f^-1(y). I chose one with high coefficients for the nonlinear part.
[Example chart of x = f^-1(y)]
It varies depending on coefficients, but always has this pseudoparabolic shape. It's of course nonlinear and even 9th order polynomial approximations don't offer satisfactory precision.
For simplicity's sake, let's say the inflection point is at X = 50%, and I am looking only for solutions where X > 50%.
How should I proceed? I'm looking to optimise as much as possible. What are some good algorithms? Thanks.
EDIT: Thank you for pointing out that this is not in fact a binary search. I've updated the code and now have much better results by comparison.
I'm not sure if Newton's method applies here, or at least I don't know how to apply it. One-way trial and error is all I can do. When I have some more time I will try to learn and implement regula falsi.
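For reference, here is a minimal Python sketch of the false-position (regula falsi) idea with the Illinois modification; the f and y_demand below are placeholders for the real model, and an off-the-shelf bracketing routine such as SciPy's brentq would do the same job with faster convergence:

def f(x):
    # Hypothetical stand-in for the real, numerically evaluated f(x) = y.
    return 0.3 * x**3 + 0.1 * x

def regula_falsi(g, lo, hi, tol=1e-9, max_iter=200):
    """Find a root of g on [lo, hi] by false position (Illinois variant)."""
    g_lo, g_hi = g(lo), g(hi)
    if g_lo * g_hi > 0:
        raise ValueError("root is not bracketed by [lo, hi]")
    side = 0
    x = lo
    for _ in range(max_iter):
        # Secant step through (lo, g_lo) and (hi, g_hi).
        x = (lo * g_hi - hi * g_lo) / (g_hi - g_lo)
        g_x = g(x)
        if abs(g_x) < tol:
            break
        if g_x * g_lo < 0:
            hi, g_hi = x, g_x
            if side == -1:
                g_lo *= 0.5   # Illinois trick: damp a stagnant endpoint.
            side = -1
        else:
            lo, g_lo = x, g_x
            if side == 1:
                g_hi *= 0.5
            side = 1
    return x

y_demand = 2.0   # placeholder target value
x_star = regula_falsi(lambda x: f(x) - y_demand, 0.5, 2.0)
print(x_star, f(x_star))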

Sympy nsolve vs Mathematica NSolve for multivariate polynomial equations

An interesting feature regarding NSolve[] in Mathematica is that it seems to provide all the solutions it can find (and hopefully the list is exhaustive). For instance, as stated in the examples:
NSolve[{x^2 + y^3 == 1, 2 x + 3 y == 4}, {x, y}]
would return an array of 3 solutions.
From what I could try, it seems to scale quite well even for, say, 20 multivariate polynomial equations in 20 variables, as can be seen in this notebook.
Alternatively, I am quite fond of using Sympy, which also features a kind of nsolve function.
But there is a catch: this function requires a starting point x0 and will typically find only one solution - and even then, only if you are lucky enough to have chosen a proper x0.
Some users have suggested a "multi-start method": choose a grid of potential starting points and run nsolve from each of them. But this doesn't fit my problem: with a grid of d points per variable, the number of starting points grows as d^20 for my problems with 20 variables. That doesn't square with Mathematica, which seems to run in a blink.
What is Mathematica doing to achieve such fast solving? Is it due to the nature of the equations? (Maybe some Groebner basis computations behind the scenes.)
Could it be done with Sympy?
Thank you for your help!
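To illustrate the difference on the two-equation example from the question, here is a small Sympy sketch: nsolve is a local, Newton-type solver that needs a starting point and returns a single root, while the symbolic solve (which, for polynomial systems, typically goes through Groebner-basis style elimination) returns all roots, at a cost that may not scale to 20 variables. The starting guess below is my own choice, near the system's single real solution:

import sympy as sp

x, y = sp.symbols("x y")
eqs = [x**2 + y**3 - 1, 2*x + 3*y - 4]   # the system from the question, written as expr == 0

# nsolve: local Newton-type iteration; needs an initial guess and returns
# one root near it.
print(sp.nsolve(eqs, (x, y), (8, -4)))

# solve: symbolic elimination; returns all three solutions, which can then
# be evaluated numerically.
for sol in sp.solve(eqs, [x, y]):
    print([sp.N(v) for v in sol])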

Curve Fitting - DataSet

I am given the following problem.
I have a set of functions that are linear combinations of given basis functions (f1, f2, f3, ..., fn), and a noisy dataset of pairs (x, y). I want to find the function from my set that best approximates the dataset.
The key to finding the solution is to find coefficients a1, a2, ..., an so that the resulting function f = a1*f1 + ... + an*fn approximates y well given the input x. If the data weren't noisy, I could just choose 5 points and solve the resulting system of equations, but I don't think this would work well with noisy data.
How would one find the coefficients?
(I am asking for an algorithm and not for a program, for example matlab, that does the job for me)
In the presence of noise you need to find an approximate solution that minimizes the discrepancy with the data.
Such best-fit problems are usually solved by optimization algorithms.
A widely used one is the Levenberg–Marquardt algorithm.
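Worth noting: because the model f = a1*f1 + ... + an*fn is linear in the coefficients, the general optimization machinery reduces here to ordinary linear least squares, which can be solved directly. A minimal sketch in Python; the basis functions, data, and coefficients below are made up for illustration, not taken from the question:

import numpy as np

# Hypothetical basis functions f1..f3; substitute the real ones.
def f1(x): return np.ones_like(x)
def f2(x): return x
def f3(x): return np.sin(x)

# Noisy (x, y) pairs generated from known coefficients, for illustration.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * f1(x) + 0.5 * f2(x) - 1.5 * f3(x) + 0.1 * rng.standard_normal(x.size)

# Design matrix: one column per basis function evaluated at the samples.
A = np.column_stack([f1(x), f2(x), f3(x)])

# Coefficients a minimizing ||A @ a - y||^2 (ordinary least squares).
a, *_ = np.linalg.lstsq(A, y, rcond=None)
print(a)   # estimates of a1, a2, a3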

Prescribing strange boundary conditions

Does anyone know how to prescribe boundary conditions like u[t,0,y] == u[t,1,1-y] in Mathematica using NDSolve? It always complains that the arguments of the dependent variable should literally match the independent variables.
Thanks in advance.
This symmetry condition can probably be recast in the form Derivative[0,1][u][x,1/2]==0. Of course, more information on the problem would be helpful.
Edit in response to rcollyer:
The algebraic identity f(x) = f(1-x) for all x in (0,1) implies a geometric symmetry: the graph of f will be symmetric about the line x = 1/2. Now draw the graph of such a function; if it is differentiable, you will find that f'(1/2) = 0. (Equivalently, differentiating f(x) = f(1-x) gives f'(x) = -f'(1-x), and setting x = 1/2 forces f'(1/2) = 0.)
Now, I don't know for sure that the OP's problem can be recast this way; it rather depends on the specifics of the problem. This situation frequently arises when dealing with PDEs on the disk where the function u is a function of polar coordinates r and theta. If the disk represents a clamped drum, then perhaps you've got u(1,t)=0. But, what of u(0,t)? If the function is symmetric and smooth, then u_x(0,t)=0 is a reasonable condition.
