Just as a silly example, say that I wish to solve the following nonlinear equation: x^2 - F(x - c) = 0, where c can take different values between zero and one and F is the standard normal CDF. If I wish to solve for one particular value of c, I would use the following code:
c = linspace(0,1,100);
L = length(c);
x0 = c;
function Y = eq(x)
    Y = x^2 - cdfnor("PQ", x - c(1), 0, 1)
endfunction
xres = fsolve(x0(1), eq);
My question is: is there a way to solve the equation for each value of c (and not only c(1))? Specifically, can I use a loop over fsolve? If so, how?
Just modify your script like this (with a vector x0, fsolve solves all 100 decoupled equations at once):
c = linspace(0,1,100);
L = length(c);
x0 = c;
function Y = eq(x)
    Y = x.^2 - cdfnor("PQ", x - c, zeros(c), ones(c))
endfunction
xres = fsolve(x0, eq);
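For comparison, the looped approach the question asks about can also work; here is a rough Python/SciPy sketch of it (my illustration, not part of the answer above, assuming numpy and scipy are available):

import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

c = np.linspace(0, 1, 100)

# Solve x**2 - Phi(x - ci) = 0 independently for each ci,
# using ci itself as the initial guess (as in the Scilab script).
xres = np.array([fsolve(lambda x, ci=ci: x**2 - norm.cdf(x - ci), ci)[0]
                 for ci in c])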
My modified block of code from here works for XOR'ing Python lists via functions (Xor and And) of the SymPy library (first block of code below). However, I am stumped on how to iterate over SymPy matrices (second block of code below).
The Python lists code that works is:
from sympy import And, Xor
from sympy.logic import SOPform, simplify_logic
from sympy import symbols

def LogMatrixMult(A, B):
    rows_A = len(A)
    cols_A = len(A[0])
    rows_B = len(B)
    cols_B = len(B[0])
    if cols_A != rows_B:
        print("Cannot multiply the two matrices. Incorrect dimensions.")
        return
    # Create the result matrix
    # Dimensions would be rows_A x cols_B
    C = [[0 for row in range(cols_B)] for col in range(rows_A)]
    for i in range(rows_A):
        for j in range(cols_B):
            for k in range(cols_A):
                # I can add SymPy's simplify_logic(-) in here
                C[i][j] = Xor(C[i][j], And(A[i][k], B[k][j]))
    return C

b, c, d, e, f, w, x, y, z = symbols('b c d e f w x y z')
m1 = [[b,c,d,e]]
m2 = [[w,x],[x,z],[y,z],[z,w]]
result = simplify_logic(LogMatrixMult(m1, m2)[0][0])
print(result)
In the block below using SymPy matrices, note that the i, j, k and C, A, B definitions are from me trying to modify the code to use an iterator; I don't know if this is needed or correct.
from sympy import And, Xor
from sympy.matrices import Matrix
from sympy.logic import SOPform, simplify_logic
from sympy import symbols, IndexedBase, Idx

def LogMatrixMultArr(A, B):
    rows_A = A.rows
    cols_A = A.cols
    rows_B = B.rows
    cols_B = B.cols
    i, j, k = symbols('i j k', cls=Idx)
    C = IndexedBase('C')
    A = IndexedBase('A')
    B = IndexedBase('B')
    if cols_A != rows_B:
        print("Cannot multiply the two matrices. Incorrect dimensions.")
        return
    # Create the result matrix
    # Dimensions would be rows_A x cols_B
    C = [[0 for row in range(cols_B)] for col in range(rows_A)]
    for i in range(rows_A):
        for j in range(cols_B):
            for k in range(cols_A):
                # I can add SymPy's simplify_logic(-) in here
                C[i, j] = Xor(C[i, j], And(A[i, k], B[k, j]))
                # C[i][j] = Xor(C[i][j], And(A[i][k], B[k][j]))
    return C

b, c, d, e, f, w, x, y, z = symbols('b c d e f w x y z')
P = Matrix([w, x]).reshape(1, 2)
Q = Matrix([y, z])
print(LogMatrixMultArr(P, Q))
The error I get is: TypeError: list indices must be integers or slices, not tuple
C[i,j] = Xor(C[i,j], And(A[i,k], B[k,j]))
Now I believe I have to do something with some special way of iterating in SymPy, but I am stuck on how to get it to work in the code, if I even need this methodology.
Also, if anyone knows how to do something such as the above using Xor and And (non-bitwise) instead of the + and * operators in a faster way, please do share.
Thanks.
I think the problem is with the IndexedBase objects. I'm not competent on these, but it seems you are not using them right. If you replace
i,j,k = symbols('i j k', cls=Idx)
C = IndexedBase('C')
A = IndexedBase('A')
B = IndexedBase('B')
by

C = zeros(rows_A, cols_B)

(where zeros comes from sympy) and remove the line C = [[0 for row in range(cols_B)] for col in range(rows_A)], then it works.
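For reference, a minimal runnable sketch with that suggestion applied (the sympy zeros matrix is the only change; the rest is from the question):

from sympy import And, Xor, symbols, zeros
from sympy.matrices import Matrix

def LogMatrixMultArr(A, B):
    rows_A, cols_A = A.rows, A.cols
    rows_B, cols_B = B.rows, B.cols
    if cols_A != rows_B:
        print("Cannot multiply the two matrices. Incorrect dimensions.")
        return
    # A sympy zeros matrix supports C[i, j] indexing, unlike a list of lists.
    C = zeros(rows_A, cols_B)
    for i in range(rows_A):
        for j in range(cols_B):
            for k in range(cols_A):
                C[i, j] = Xor(C[i, j], And(A[i, k], B[k, j]))
    return C

b, c, d, e, f, w, x, y, z = symbols('b c d e f w x y z')
P = Matrix([w, x]).reshape(1, 2)
Q = Matrix([y, z])
print(LogMatrixMultArr(P, Q))  # a 1x1 matrix containing (w & y) ^ (x & z)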
I'm struggling to figure out an algorithm to find the intersection of two linear equations like:
f(x)=2x+4
g(x)=x+2
I'd like to use the method where you set f(x) = g(x) and solve for x, and I'd like to stay away from the cross product.
Does anyone have any suggestions as to what an algorithm like that would look like?
If your input lines are in slope-intercept form, an algorithm is overkill, as there is a direct formula to calculate their point of intersection. It's given on a Wikipedia page, and you can understand it as explained below.
Given the equations of the lines: The x and y coordinates of the
point of intersection of two non-vertical lines can easily be found
using the following substitutions and rearrangements.
Suppose that two lines have the equations y = ax + c and y = bx + d where a
and b are the slopes (gradients) of the lines and where c and d are
the y-intercepts of the lines. At the point where the two lines
intersect (if they do), both y coordinates will be the same, hence the
following equality:
ax + c = bx + d.
We can rearrange this expression in order to extract the
value of x,
ax - bx = d - c, and so,
x = (d-c)/(a-b).
To find the y coordinate, all we need to do is substitute the value of x into either one of the two line equations. For example, into the first:
y=(a*(d-c)/(a-b))+c.
Hence, the point of intersection is ((d-c)/(a-b), (a*(d-c)/(a-b))+c).
Note: If a = b then the two lines are parallel. If c ≠ d as well, the lines
are different and there is no intersection, otherwise the two lines are
identical.
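As a sketch (mine, not from the Wikipedia page), the formula and the parallel-line note translate directly into code:

def intersect(a, c, b, d):
    # Intersection of y = a*x + c and y = b*x + d.
    # Returns None when a == b (parallel; identical lines if also c == d).
    if a == b:
        return None
    x = (d - c) / (a - b)
    return (x, a * x + c)

print(intersect(2, 4, 1, 2))  # the question's lines f and g: (-2.0, 0.0)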
Given:
ax + b = cx + d
ax = cx + d - b
ax - cx = d - b
x(a - c) = d - b
Therefore, x = (d - b) / (a - c)
In your example, let a = 2, b = 4, c = 1, d = 2:
x = (2 - 4) / (2 - 1)
x = -2 / 1
x = -2
General solution. Let
f(x) = a1*x + b1 and g(x) = a2*x + b2
Special cases:
a1 == a2 and b1 == b2 : lines coincide
a1 == a2 and b1 != b2 : lines are parallel, no intersection
General case: a1 != a2
X = (b2 - b1) / (a1 - a2) and Y = (a1*b2 - a2*b1) / (a1 - a2)
I don't remember what cross products are in the context of equations.
One way to solve these is to set them equal to each other, solve for x, then use that value to solve for y:
2x + 4 = x + 2
2x + 2 = x
x = -2
y = f(x)
= g(x)
= x + 2
= -2 + 2
= 0
Solution: (-2, 0)
[edit] The part about "f" is solved. Here is what I did:
Instead of using:
X = (F * W' - Y);
f = X' * X;
I'm now using:
X = F*W';
A = X'*F*W';
B = -2*X'*Y;
Y1 = Y'*Y;
f = A + B + Y1;
This will give a massive speed up. Still, the problem with the Hessian of f remains.
[/edit]
So, I'm having some serious performance "problems" with a quadratic optimization problem I'm trying to solve in Matlab. The problem is not the optimization per se, but the calculation of the target function and the Hessian. Right now it looks like this (F and Y aren't random at all and will hold real data; also, it is not necessarily unconstrained, because then the solution would of course be (F'F)^-1*F'*Y):
W_a = sym('w_a_%d', [1 96]);
W_b = sym('w_b_%d', [1 96]);
for i = 1:96
    W(1,2*(i-1)+1) = W_a(1,i);
    W(1,2*i) = W_b(1,i);
end

F = rand(10000,192);
Y = rand(10000,1);

q = [];
for i = 1:192
    q = [q sum(-Y(:).*F(:,i))];
end
q = 2*q;
q = double(q);

X = (F * W' - Y);
f = X' * X;
H = hessian(f);
H = double(H);

A=[]; b=[];
Aeq=[]; beq=[];
lb=[]; ub=[];
options = optimset('Algorithm', 'active-set', 'Display', 'off');
[xsol,~,exitflag,output] = quadprog(H, q, A, b, Aeq, beq, lb, ub, [], options);
The thing is: calculating f and H takes like forever.
I'm not expecting that there are ways to significantly speed this up, since Matlab is optimized for stuff like this. But maybe someone knows of some open-license software that's almost as fast as Matlab, so that I could calculate f and H with that software on a faster machine (which unfortunately has no Matlab license ...) and then let Matlab do the optimization.
Right now I'm kinda lost in this :/
Thank you very much in advance. Even some keywords could help me here like "Look for software xy"
If speed is your concern, using symbolic methods is usually the wrong approach (especially for large systems or if you need to run something repeatedly). You'll need to calculate your Hessian numerically. There's an excellent utility on the MathWorks File Exchange that can do this for you: the DERIVEST suite. It includes a numeric Hessian function. You'll need to formulate your f as a function of X.
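As a hedged aside (my own derivation, not part of the answer above): because f = ||F*W' - Y||^2 is quadratic in W, its Hessian is available in closed form, with no symbolic or numeric differentiation needed. A NumPy sketch of that identity:

import numpy as np

F = np.random.rand(10000, 192)   # placeholder for the real data
Y = np.random.rand(10000, 1)

# Expanding ||F @ w - Y||^2 gives w'(F'F)w - 2(F'Y)'w + Y'Y,
# so the Hessian and linear term fall out directly:
H = 2 * F.T @ F    # constant 192 x 192 Hessian
q = -2 * F.T @ Y   # matches the q built in the question's loop

These are exactly the H and q that quadprog expects for minimizing 0.5*x'*H*x + q'*x.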
In chapter 1 on fixed points, the book says we can find fixed points of certain functions using
f(x) = f(f(x)) = f(f(f(x))) ....
What are those functions?
It doesn't work for y = 2y; when I rewrite it as y = y/2, it works.
Does y need to get smaller every time? Or are there any general attributes that a function has to have for fixed points to be found by that method? What conditions should it satisfy for this to work?
According to the Banach fixed-point theorem, such a point exists, and the iteration converges to it, if the mapping (function) is a contraction. That means that, for example, the iteration fails for y = 2x (its only fixed point, 0, repels nearby iterates) but works for y = 0.999*x. In general, if f maps [a,b] to [a,b], then |f(x) - f(y)| should be at most c * |x - y| for some 0 <= c < 1 (for all x, y from [a, b]).
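To make the method concrete, here is a minimal fixed-point iteration sketch in Python (the tolerance and iteration cap are my own choices):

import math

def fixed_point(f, guess, tol=1e-9, max_iter=1000):
    # Iterate x, f(x), f(f(x)), ... until successive values agree.
    x = guess
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no convergence; f is probably not a contraction here")

print(fixed_point(math.cos, 1.0))        # ~0.739085 (cos contracts on [0, 1])
print(fixed_point(lambda y: y / 2, 1.0)) # converges to 0
# fixed_point(lambda y: 2 * y, 1.0) raises: the iterates run away from 0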
Say you have:
f(x) = sin(x)
then x = 0 is a fixed point of the function since:
f(0) = sin(0) = 0
f(f(0)) = sin(sin(0)) = sin(0) = 0
Not every point along x is a fixed point of sin, only 0 is.
Different functions have different fixed points, if any. You can find more on fixed points of functions at Wikipedia.
I have a vector X of 20 real numbers and a vector Y of 20 real numbers.
I want to model them as
y = ax^2+bx + c
How do I find the values of 'a', 'b' and 'c', and the best-fit quadratic equation?
Given Values
X = (x1,x2,...,x20)
Y = (y1,y2,...,y20)
I need a formula or procedure to find the following values:
a = ???
b = ???
c = ???
Thanks in advance.
Everything @Bartoss said is right, +1. I figured I'd just add a practical implementation here, without QR decomposition. You want to evaluate the values of a, b, c such that the distance between measured and fitted data is minimal. You can pick as a measure
sum((a*x^2 + b*x + c - y)^2)
where the sum is over the elements of vectors x,y.
Then, a minimum implies that the derivative of the quantity with respect to each of a,b,c is zero:
d(sum((a*x^2 + b*x + c - y)^2))/da = 0
d(sum((a*x^2 + b*x + c - y)^2))/db = 0
d(sum((a*x^2 + b*x + c - y)^2))/dc = 0
These equations are
2*sum((a*x^2 + b*x + c - y)*x^2) = 0
2*sum((a*x^2 + b*x + c - y)*x) = 0
2*sum((a*x^2 + b*x + c - y)) = 0
Dividing by 2, the above can be rewritten as
a*sum(x^4) +b*sum(x^3) + c*sum(x^2) =sum(y*x^2)
a*sum(x^3) +b*sum(x^2) + c*sum(x) =sum(y*x)
a*sum(x^2) +b*sum(x) + c*N =sum(y)
where N = 20 in your case. Simple Python code showing how to do this follows.
from numpy import random, array
from scipy.linalg import solve
import matplotlib.pyplot as plt

a, b, c = 6., 3., 4.
N = 20
x = random.rand(N)
y = a * x ** 2 + b * x + c
y += random.rand(N)  # add a bit of noise to make things more realistic

x4 = (x ** 4).sum()
x3 = (x ** 3).sum()
x2 = (x ** 2).sum()
M = array([[x4, x3, x2], [x3, x2, x.sum()], [x2, x.sum(), N]])
K = array([(y * x ** 2).sum(), (y * x).sum(), y.sum()])
A, B, C = solve(M, K)
print('exact values ', a, b, c)
print('calculated values', A, B, C)

fig, ax = plt.subplots()
ax.plot(x, y, 'b.', label='data')
ax.plot(x, A * x ** 2 + B * x + C, 'r.', label='estimate')
ax.legend()
plt.show()
A quicker way to implement a solution is to use a nonlinear least squares algorithm. This will be faster to write, but not faster to run. Using the one provided by scipy:
from scipy.optimize import leastsq

def f(arg):
    a, b, c = arg
    return a * x ** 2 + b * x + c - y

# you must provide a first guess to start with in this case
(A, B, C), _ = leastsq(f, [1, 1, 1])
That is a linear least squares problem. I think the easiest method that gives accurate results is QR decomposition using Householder reflections. It is not something to be explained in a Stack Overflow answer, but I hope you will find all that is needed via these links.
If you have never heard about these before and don't know how they connect with your problem:
A = [[x1^2, x1, 1]; [x2^2, x2, 1]; ...]
Y = [y1; y2; ...]
Now you want to find v = [a; b; c] such that A*v is as close as possible to Y, which is exactly what the least squares problem is all about.
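If you would rather not implement QR yourself, numpy's least-squares solver accepts exactly this design matrix. A sketch with stand-in data (the coefficients 6, 3, 4 and the noise level are arbitrary choices of mine):

import numpy as np

x = np.random.rand(20)                           # stand-ins for the question's 20 values
y = 6 * x**2 + 3 * x + 4 + 0.1 * np.random.rand(20)

A = np.column_stack([x**2, x, np.ones_like(x)])  # rows are [xi^2, xi, 1]
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
print(a, b, c)

np.polyfit(x, y, 2) returns the same three coefficients.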