I'm trying to take the square root of a matrix, that is, find a matrix B such that B*B = A. None of the methods I've found gives a working result.
First I found this formula on Wikipedia:
Set Y_0 = A and Z_0 = I; then iterate:
Y_{k+1} = .5*(Y_k + Z_k^{-1}),
Z_{k+1} = .5*(Z_k + Y_k^{-1}).
Then Y should converge to B.
However, implementing the algorithm in Python (using numpy for the matrix inverses) gave me rubbish results:
>>> def denbev(Y, Z, n):
...     if n == 0: return Y, Z
...     return denbev(.5*(Y + Z**-1), .5*(Z + Y**-1), n-1)
>>> denbev(matrix('1,2;3,4'), matrix('1,0;0,1'), 3)[0]**2
matrix([[ 1.31969074, 1.85986159],
[ 2.78979239, 4.10948313]])
>>> denbev(matrix('1,2;3,4'), matrix('1,0;0,1'), 100)[0]**2
matrix([[ 1.44409972, 1.79685675],
[ 2.69528512, 4.13938485]])
As you can see, iterating 100 times gives worse results than iterating three times, and neither result gets within a 40% error margin.
Then I tried the scipy sqrtm method, but that was even worse:
>>> scipy.linalg.sqrtm(matrix('1,2;3,4'))**2
array([[ 0.09090909+0.51425948j, 0.60606061-0.34283965j],
[ 1.36363636-0.77138922j, 3.09090909+0.51425948j]])
>>> scipy.linalg.sqrtm(matrix('1,2;3,4')**2)
array([[ 1.56669890+0.j, 1.74077656+0.j],
[ 2.61116484+0.j, 4.17786374+0.j]])
I don't know a lot about matrix square rooting, but I figure there must be algorithms that perform better than the above?
(1) The square root of the matrix [1,2;3,4] should give something complex, as one of the eigenvalues of that matrix is negative. So a real-valued result can't be correct to begin with.
(2) linalg.sqrtm returns an array, NOT a matrix. Hence, using * (or **) on it works element-wise rather than as matrix multiplication. In your case the solution is thus correct, but you're not seeing it.
Edit: try the following; you'll see it's correct:
asmatrix(scipy.linalg.sqrtm(matrix('1,2;3,4')))**2
Your matrix [1 2; 3 4] isn't positive definite, so there is no solution to the problem in the domain of real matrices.
What is the purpose of the matrix square root you're computing? I suspect that in a practical application the matrix really would be symmetric positive definite (e.g. a covariance matrix), so you shouldn't encounter complex numbers.
In that case you can compute a Cholesky decomposition, which is like a scaled LU factorization; see here: http://en.wikipedia.org/wiki/Cholesky_decomposition
Another practical example is if your matrices are rotations: then you can first decompose with the matrix log, just divide by 2 in log space, and go back to a rotation with the matrix exponential. In any event, it sounds strange that you ask for a 'generic matrix square root'; you probably want to understand the specific application in more depth.
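For illustration, a minimal numpy/scipy sketch of both suggestions (my own, not part of the original answer; the matrices are made-up examples):
import numpy as np
from scipy.linalg import logm, expm

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])           # a symmetric positive definite example
L = np.linalg.cholesky(A)            # lower triangular factor, so L @ L.T == A
print(np.allclose(L @ L.T, A))       # True

theta = 0.8
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a plane rotation
R_half = expm(logm(R) / 2)           # halve the rotation in log space
print(np.allclose(R_half @ R_half, R))            # True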
I'm trying to get a single element of an adjugate A_adj of a matrix A, both of which need to be symbolic expressions, where the symbols x_i are binary and the matrix A is symmetric and sparse. Python's sympy works great for small problems:
from sympy import zeros, symbols
size = 4
A = zeros(size,size)
x_i = [x for x in symbols(f'x0:{size}')]
for i in range(size-1):
    A[i,i] += 0.5*x_i[i]
    A[i+1,i+1] += 0.5*x_i[i]
    A[i,i+1] = A[i+1,i] = -0.3*(i+1)*x_i[i]
A_adj_0 = A[1:,1:].det()
A_adj_0
This calculates the first element A_adj_0 of the cofactor matrix (which is the corresponding minor) and correctly gives me 0.125*x0*x1*x2 - 0.28*x0*x2**2 - 0.055*x1**2*x2 - 0.28*x1*x2**2, which is the expression I need, but there are two issues:
This is completely unfeasible for larger matrices (I need this for sizes of ~100).
The x_i are binary variables (i.e. either 0 or 1) and there seems to be no way for sympy to simplify expressions of binary variables, i.e. simplifying polynomials x_i^n = x_i.
The first issue can be partly addressed by instead solving a linear equation system Ay = b, where b is set to the first basis vector [1, 0, 0, 0], such that y is the first column of the inverse of A. The first entry of y is the first element of the inverse of A:
b = zeros(size,1)
b[0] = 1
y = A.LUsolve(b)
s = {x_i[i]: 1 for i in range(size)}
print(y[0].subs(s) * A.subs(s).det())
print(A_adj_0.subs(s))
The problem here is that the expression for the first element of y is extremely complicated, even after using simplify() and so on. With simplification of binary expressions, as mentioned in point 2 above, it would be a very simple expression. This method is faster, but still infeasible for larger matrices.
This boils down to my actual question:
Is there an efficient way to compute a single element of the adjugate of a sparse and symmetric symbolic matrix, where the symbols are binary values?
I'm open to using other software as well.
Addendum 1:
It seems simplifying binary expressions in sympy is possible with a simple custom substitution which I wasn't aware of:
A_subs = A_adj_0
for i in range(size):
    A_subs = A_subs.subs(x_i[i]*x_i[i], x_i[i])
A_subs
You should make sure to use Rational rather than floats in sympy so S(1)/2 or Rational(1, 2) rather than 0.5.
There is a new (undocumented and, for the moment, internal) implementation of matrices in sympy called DomainMatrix. It always produces polynomial results in a fully expanded form and is likely to be a lot faster for a problem like this. It still seems to be fairly slow here, though, because it is not sparse internally (yet - that will probably change in the next release) and it does not take advantage of the simplification from the symbols being binary-valued. It can be made to work over GF(2), but not with symbols that are assumed to be in GF(2), which is something different.
In case it is helpful though this is how you would use it in sympy 1.7.1:
from sympy import zeros, symbols, Rational
from sympy.polys.domainmatrix import DomainMatrix
size = 10
A = zeros(size,size)
x_i = [x for x in symbols(f'x0:{size}')]
for i in range(size-1):
    A[i,i] += Rational(1, 2)*x_i[i]
    A[i+1,i+1] += Rational(1, 2)*x_i[i]
    A[i,i+1] = A[i+1,i] = -Rational(3, 10)*(i+1)*x_i[i]
# Convert to DomainMatrix:
dM = DomainMatrix.from_list_sympy(size-1, size-1, A[1:, 1:].tolist())
# Compute determinant and convert back to normal sympy expression:
# Could also use dM.det().as_expr() although it might be slower
A_adj_0 = dM.charpoly()[-1].as_expr()
# Reduce powers:
A_adj_0 = A_adj_0.replace(lambda e: e.is_Pow, lambda e: e.args[0])
print(A_adj_0)
Imagine a puzzle like this:
(puzzle image)
I have several shapes, for example :
10 circles
8 triangles
9 squares
I also have some plates to put the shapes in, for example:
plate A : 2 circle holes, 3 triangle holes, 1 square hole
plate B : 1 circle hole, 0 triangle holes, 3 square holes
plate C : 2 circle holes, 2 triangle holes, 2 square holes
I want to find the minimum number of plates needed to place all the shapes (the plates do not need to be filled completely).
For example:
I can pick 6 plates [A, A, A, B, B, C], and I can insert all the shapes,
but I can also pick [A, C, C, C, C], and this is okay too,
so the answer to this problem is: 5
If this problem is generalized to N types of shapes and M types of plates, what is the best algorithm to solve it, and what is its time complexity?
This problem is NP-hard; this is easier to see once you realize that there is a very simple polynomial-time reduction from the bin packing problem to this problem.
What I would suggest is for you to use integer linear programming techniques in order to solve it.
An ILP that solves your problem can be the following:
// Data
Shapes // array of integers of size n, contains the number of each shape to fit
Plates // 2D array of size n * m, Plates[i][j] represents the number of shape of type i
// that fit on a plate of type j
// Decision variables
X // array of integer of size m, will represent the number of plates of each type to use
// Constraints
For all j in 1 .. m, X[j] >= 0 // number of plates cannot be negative
For all i in 1 .. n, sum(j in 1..m) Plates[i][j] * X[j] >= Shapes[i] // all shapes must fit
Objective function:
minimize sum(j in 1..m) X[j]
Write the pseudo code in OPL, feed it to a linear programming solver, and you should get a solution reasonably fast, given the similarity of this problem with bin packing.
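If you would rather stay in Python than learn OPL, a minimal sketch of the same ILP using scipy.optimize.milp (my own illustration, not part of the original answer) could look like this, with the plate data from the question:
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

plates = np.array([[2, 1, 2],    # circle holes on plate types A, B, C
                   [3, 0, 2],    # triangle holes
                   [1, 3, 2]])   # square holes
shapes = np.array([10, 8, 9])    # circles, triangles, squares to place

c = np.ones(plates.shape[1])                  # objective: total number of plates
cover = LinearConstraint(plates, lb=shapes)   # plates @ x >= shapes, i.e. all shapes must fit
res = milp(c=c, constraints=cover,
           integrality=np.ones_like(c),       # x must be integer
           bounds=Bounds(lb=0))               # x >= 0
print(res.x, res.fun)                         # e.g. [1. 0. 4.] 5.0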
Edit: if you do not want to go through the trouble of learning LP basics, OPL, LP solvers, etc., then the best and easiest approach for this problem would be a good old branch and bound implementation. Branch and bound is a very simple and powerful algorithm that can be used to solve a wide range of problems... a must-know.
I think a solution to this problem should use dynamic programming.
Here is a solution in pseudo-code (I haven't tested it, but I think it should work):
parts = the number of each shape we want to fit, as a vector
plates = the plates we can use, as a matrix (vector of vectors)
function findSolution(parts, usedPlates):
    if parts <= 0 on every index: //all shapes have been placed
        return usedPlates
    else:
        bestSolution = null //or anything that shows that there is no solution yet
        for X in plates:
            if (parts > 0 on any index where X > 0): //prevents an infinite loop (or stack overflow because of the recursion) that would occur using only e.g. plate B from your question
                solution = findSolution(parts - X, usedPlates.add(X)) //elementwise subtraction; recursion
                if (bestSolution == null or solution.length < bestSolution.length):
                    //this solution is better than the current best one
                    bestSolution = solution
        //return the best solution that was found
        return bestSolution
Using the values from your question, the initial variables would be:
parts = [10, 8, 9]
plates = [[2, 3, 1], [1, 0, 3], [2, 2, 2]]
and you would start the function like this:
solution = findSolution(parts /*= [10, 8, 9]*/, new empty list);
//solution would probably be [A, C, C, C, C], but [C, C, C, C, C] would also be possible (in every case the solution has the optimal length of 5)
Using this algorithm you divide the problem into smaller problems using recursion (which is what most dynamic programming algorithms do).
The time complexity of this is not really good, because you have to search every possible solution.
According to the master theorem the time complexity should be something like O(n^(log_b(a))), where n = a = the number of plate types (in your example 3). b (the base of the logarithm) can't easily be calculated here (or at least I don't know how), but I assume it would be close to 1, which makes for quite a large exponent. It also depends on the sizes of the entries in the parts vector and in the plates vectors (fewer plates needed -> better time complexity, many plates needed -> worse time complexity).
So the time complexity is not very good. For bigger problems this will take very, very long, but for small problems like the one in your question it should work.
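For what it's worth, a small runnable Python version of this idea could look like the following (my own sketch, not the answer author's code); memoizing on the remaining counts, clamped at zero, keeps the number of states finite and turns the recursion into a genuine dynamic program:
from functools import lru_cache

plates = {"A": (2, 3, 1), "B": (1, 0, 3), "C": (2, 2, 2)}   # (circles, triangles, squares) per plate type

@lru_cache(maxsize=None)
def min_plates(parts):
    """Minimum number of plates needed to cover the remaining shape counts."""
    if all(p == 0 for p in parts):
        return 0
    best = float("inf")
    for plate in plates.values():
        # only try a plate that still covers at least one remaining shape
        if any(p > 0 and h > 0 for p, h in zip(parts, plate)):
            rest = tuple(max(p - h, 0) for p, h in zip(parts, plate))
            best = min(best, 1 + min_plates(rest))
    return best

print(min_plates((10, 8, 9)))   # 5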
I'm trying to create a program that takes a square (n-by-n) matrix as input, and if it is invertible, will LU decompose the matrix using Gaussian Elimination.
Here is my problem: in class we learned that it is better to switch rows so that your pivot is always the largest number (in absolute value) in its column. For example, if the matrix is A = [1,2;3,4], then after switching rows it is [3,4;1,2], and then we can proceed with the Gaussian elimination.
My code works properly for matrices that don't require row changes, but for ones that do, it does not. This is my code:
function newgauss(A)
    [rows,columns]=size(A);
    P=eye(rows,columns); %P is permutation matrix
    if(det(A)==0) %% determinant is 0 means no single solution
        disp('No solutions or infinite number of solutions')
        return;
    end
    U=A;
    L=eye(rows,columns);
    pivot=1;
    while(pivot<rows)
        max=abs(U(pivot,pivot));
        maxi=0; %%find maximum abs value in column pivot
        for i=pivot+1:rows
            if(abs(U(i,pivot))>max)
                max=abs(U(i,pivot));
                maxi=i;
            end
        end %%if needed then switch
        if(maxi~=0)
            temp=U(pivot,:);
            U(pivot,:)=U(maxi,:);
            U(maxi,:)=temp;
            temp=P(pivot,:);
            P(pivot,:)=P(maxi,:);
            P(maxi,:)=temp;
        end %%Grade the column pivot using gauss elimination
        for i=pivot+1:rows
            num=U(i,pivot)/U(pivot,pivot);
            U(i,:)=U(i,:)-num*U(pivot,:);
            L(i,pivot)=num;
        end
        pivot=pivot+1;
    end
    disp('PA is:');
    disp(P*A);
    disp('LU is:');
    disp(L*U);
end
Clarification: since we are switching rows, we are looking to decompose P (permutation matrix) times A, and not the original A that we had as input.
Explanation of the code:
First I check if the matrix is invertible; if it isn't, stop. If it is, the pivot is (1,1).
I find the largest number (in absolute value) in column 1, and switch rows.
Grade column 1 using Gaussian elimination, turning all entries below (1,1) to zero.
The pivot is now (2,2); find the largest number in column 2... rinse, repeat.
Your code seems to work fine from what I can tell, at least for the basic examples A=[1,2;3,4] or A=[3,4;1,2]. Change your function definition to:
function [L,U,P] = newgauss(A)
so you can output your calculated values (much better than using disp, but this shows the correct results too). Then you'll see that P*A = L*U. Maybe you were expecting L*U to equal A directly? You can also confirm that you are correct via Matlab's lu function:
[L,U,P] = lu(A);
L*U
P*A
Permutation matrices are orthogonal matrices, so P^-1 = P^T. If you want to get back A in your code, you can do:
P'*L*U
Similarly, using Matlab's lu with the permutation matrix output, you can do:
[L,U,P] = lu(A);
P'*L*U
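For reference, the same check looks like this in scipy/numpy terms (a sketch of my own, not part of this answer); note that scipy.linalg.lu returns P such that A = P @ L @ U, i.e. the transpose of MATLAB's convention:
import numpy as np
from scipy.linalg import lu

A = np.array([[1.0, 2.0], [3.0, 4.0]])
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))     # True
print(np.allclose(P.T @ A, L @ U))   # True, i.e. the MATLAB-style P*A = L*U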
(You should also use error or warning rather than how you're using disp in checking the determinant, but they probably don't teach that.)
Note that the det function is implemented using an LU decomposition itself to compute the determinant... recursive anyone :)
Aside from that, there is a reminder towards the end of the det documentation page which suggests using cond instead of det to test for matrix singularity:
Testing singularity using abs(det(X)) <= tolerance is not
recommended as it is difficult to choose the correct tolerance. The
function cond(X) can check for singular and nearly singular
matrices.
COND uses the singular value decomposition (see its implementation: edit cond.m)
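As a tiny illustration of that recommendation (my own, in numpy terms, since the same point holds in MATLAB): a perfectly well-conditioned matrix can still have a vanishingly small determinant, so a det-based threshold is misleading.
import numpy as np

A = 0.1 * np.eye(50)            # well conditioned, nowhere near singular
print(np.linalg.det(A))         # ~1e-50, would fail any abs(det) <= tol test
print(np.linalg.cond(A))        # 1.0, correctly reports perfect conditioning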
For anyone finding this in the future and needing a working solution:
The OP's code doesn't contain the logic for switching elements in L when creating the permutation matrix P. The adjusted code that gives the same output as Matlab's lu(A) function is:
function [L,U,P] = newgauss(A)
    [rows,columns]=size(A);
    P=eye(rows,columns); %P is permutation matrix
    tol = 1E-16; % I believe this is what matlab uses as a warning level
    if( rcond(A) <= tol) %% bad condition number
        error('Matrix is nearly singular')
    end
    U=A;
    L=eye(rows,columns);
    pivot=1;
    while(pivot<rows)
        max=abs(U(pivot,pivot));
        maxi=0; %%find maximum abs value in column pivot
        for i=pivot+1:rows
            if(abs(U(i,pivot))>max)
                max=abs(U(i,pivot));
                maxi=i;
            end
        end %%if needed then switch
        if(maxi~=0)
            temp=U(pivot,:);
            U(pivot,:)=U(maxi,:);
            U(maxi,:)=temp;
            temp=P(pivot,:);
            P(pivot,:)=P(maxi,:);
            P(maxi,:)=temp;
            % change elements in L-----
            if pivot >= 2
                temp=L(pivot,1:pivot-1);
                L(pivot,1:pivot-1)=L(maxi,1:pivot-1);
                L(maxi,1:pivot-1)=temp;
            end
        end %%Grade the column pivot using gauss elimination
        for i=pivot+1:rows
            num=U(i,pivot)/U(pivot,pivot);
            U(i,:)=U(i,:)-num*U(pivot,:);
            L(i,pivot)=num;
        end
        pivot=pivot+1;
    end
end
Hope this helps someone stumbling upon this in the future.
I want to make a linear fit to a few data points, as shown in the image. Since I know the intercept (in this case say 0.05), I want to fit only the points which are in the linear region with this particular intercept. In this case it will be, let's say, points 5:22 (but not 22:30).
I'm looking for a simple algorithm to determine this optimal number of points, based on... hmm, that's the question... R^2? Any ideas how to do it?
I was thinking about probing R^2 for fits using points 1 to 2:30, 2 to 3:30, and so on, but I don't really know how to wrap that into a clear and simple function. For fits with a fixed intercept I'm using polyfit0 (http://www.mathworks.com/matlabcentral/fileexchange/272-polyfit0-m). Thanks for any suggestions!
EDIT:
sample data:
intercept = 0.043;
x = 0.01:0.01:0.3;
y = [0.0530642513911393,0.0600786706929529,0.0673485248329648,0.0794662409166333,0.0895915873196170,0.103837395346484,0.107224784565365,0.120300492775786,0.126318699218730,0.141508831492330,0.147135757370947,0.161734674733680,0.170982455701681,0.191799936622712,0.192312642057298,0.204771365716483,0.222689541632988,0.242582251060963,0.252582727297656,0.267390860166283,0.282890010610515,0.292381165948577,0.307990544720676,0.314264952297699,0.332344368808024,0.355781519885611,0.373277721489254,0.387722683944356,0.413648156978284,0.446500064130389;];
What you have here is a rather difficult problem to find a general solution to.
One approach would be to compute all the slopes/intercepts between all consecutive pairs of points, and then do cluster analysis on the intercepts:
slopes = diff(y)./diff(x);
intercepts = y(1:end-1) - slopes.*x(1:end-1);
idx = kmeans(intercepts, 3);
x([idx; 3] == 2) % the points with the intercepts closest to the linear one
This requires the Statistics Toolbox (for kmeans). This is the best of all the methods I tried, although the range of points found this way might have a few small holes in it; e.g., when the slopes of two points in the start or end range lie close to the slope of the line, these points will be detected as belonging to the line. This (and other factors) will require a bit more post-processing of the solution found this way.
Another approach (which I failed to make work) is to do a linear fit in a loop, each time increasing the range of points from some point in the middle towards both endpoints, and checking whether the sum of squared errors remains small. I gave up on this very quickly, because defining what "small" means is very subjective and must be done in some heuristic way.
I tried a more systematic and robust version of the above:
function test
    %% example data
    slope = 2;
    intercept = 1.5;
    x = linspace(0.1, 5, 100).';
    y = slope*x + intercept;
    y(1:12) = log(x(1:12)) + y(12)-log(x(12));
    y(74:100) = y(74:100) + (x(74:100)-x(74)).^8;
    y = y + 0.2*randn(size(y));
    %% simple algorithm
    [X,fn] = fminsearch(@(ii)P(ii, x,y,intercept), [0.5 0.5])
    [~,inds] = P(X, x,y,intercept)
end
function [C, inds] = P(ii, x,y,intercept)
    % ii represents the fraction of the range from the center to each end,
    % so ii lies between 0 and 1.
    N = numel(x);
    n = round(N/2);
    ii = round(ii*n);
    inds = min(max(1, n+(-ii(1):ii(2))), N);
    % Solve linear system with fixed intercept
    A = x(inds);
    b = y(inds) - intercept;
    % and return the sum of squared errors, divided by
    % the number of points included in the set. This
    % last step is required to prevent fminsearch from
    % reducing the set to 1 point (= minimum possible
    % squared error).
    C = sum(((A\b)*A - b).^2)/numel(inds);
end
which only finds a rough approximation to the desired indices (12 and 74 in this example).
When fminsearch is run a few dozen times with random starting values (really just rand(1,2)), it gets more reliable, but I still wouldn't bet my life on it.
If you have the statistics toolbox, use the kmeans option.
Depending on the number of data values, I would split the data into a relatively small number of overlapping segments, and for each segment calculate the linear fit, or rather the first-order coefficient (remember you know the intercept, which will be the same for all segments).
Then, for each coefficient, calculate the MSE between this hypothetical line and the entire dataset, choosing the coefficient which yields the smallest MSE.
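A rough numpy sketch of that segment idea (my own reading of this answer, not the author's code): fit a fixed-intercept slope on each overlapping window, then keep the slope whose line gives the smallest MSE against the whole dataset. The window and step sizes are arbitrary choices here.
import numpy as np

def best_slope(x, y, intercept, win=8, step=4):
    candidates = []
    for start in range(0, len(x) - win + 1, step):
        xs, ys = x[start:start + win], y[start:start + win]
        slope = np.sum(xs * (ys - intercept)) / np.sum(xs ** 2)  # least-squares slope with fixed intercept
        mse = np.mean((slope * x + intercept - y) ** 2)          # error against the entire dataset
        candidates.append((mse, slope))
    return min(candidates)[1]

# e.g. best_slope(np.asarray(x), np.asarray(y), 0.043) with the sample data from the question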
Wikipedia says we can approximate Bark scale with the equation:
b(f) = 13*atan(0.00076*f)+3.5*atan(power(f/7500,2))
How can I divide frequency spectrum into n intervals of the same length on Bark scale (interval division points will be equidistant on Bark scale)?
The best way would be to invert the function analytically (express x as a function of y). I tried doing it on paper but failed. The WolframAlpha search bar couldn't do it either. I tried Octave's finverse function, but I got an error.
Octave says (for simpler example):
octave:2> x = sym('x');
octave:3> finverse(2*x)
error: `finverse' undefined near line 3 column 1
This is finverse description from Matlab: http://www.mathworks.com/help/symbolic/finverse.html
There could also be a numerical way to do it. I can imagine just starting by dividing the y axis equally and searching for the ideal division by binary search. But maybe there are existing tools that do this?
You need to solve this equation numerically (there is no analytical inverse function). Set equally spaced values for b and solve the equation to find the various f. Bisection is somewhat slow, but a very good alternative is Brent's method. See http://en.wikipedia.org/wiki/Brent%27s_method
This function can't be inverted analytically. You'll have to use some numerical procedure. Binary search would be fine, but there are more efficient ways to do these sorts of things: look into root-finding algorithms. You can apply your algorithm of choice to the equation b(f) = f_n for each of the frequency interval endpoints f_n.
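To make this concrete, here is a small scipy sketch (my own, not from the answers above) that inverts b(f) with Brent's method for a set of equally spaced Bark values; the 25 kHz upper bracket is an assumption that comfortably covers the audible range.
import numpy as np
from scipy.optimize import brentq

def bark(f):
    return 13 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def bark_to_hz(b, f_max=25000.0):
    # bark(f) is monotonically increasing, so the root is bracketed by [0, f_max]
    return brentq(lambda f: bark(f) - b, 0.0, f_max)

band_edges_bark = np.arange(1, 25)                       # e.g. 24 equally spaced Bark values
band_edges_hz = [bark_to_hz(b) for b in band_edges_bark]
print(band_edges_hz)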
Just so you know, in (say) octave to implement rpsmi's or David Zaslavsky's answer, you'd do something like this:
global x0 = 0.

function res = b(f)
    global x0
    res = 13*atan(0.00076*f)+3.5*atan(power(f/7500,2)) - x0
end

function [intervals, barks] = barkintervals(left, right, n)
    global x0
    intervals = linspace(left, right, n);
    barks = intervals;
    for i = 1:n
        x0 = intervals(i);
        # 125*x0 is just a crude guess starting point given the values
        [barks(i), fval, info] = fsolve('b', 125*x0);
    endfor
end
and run it like so:
octave:1> barks
octave:2> [i,bx] = barkintervals(0, 10, 10)
[... lots of output from fsolve deleted...]
i =
Columns 1 through 8:
0.00000 1.11111 2.22222 3.33333 4.44444 5.55556 6.66667 7.77778
Columns 9 and 10:
8.88889 10.00000
bx =
Columns 1 through 6:
0.0000e+00 1.1266e+02 2.2681e+02 3.4418e+02 4.6668e+02 5.9653e+02
Columns 7 through 10:
7.3639e+02 8.8960e+02 1.0605e+03 1.2549e+03
I finally decided not to use the Bark value approximation but the ideal values for the critical band centres (defined for n = 1..24). I plotted them with gnuplot and, on the same graph, plotted arbitrarily chosen values for points of greater density (for the required n > 24). I adjusted the point values in Hz until both curves were approximately the same.
Of course rpsmi's and David Zaslavsky's answers are more general and scalable.