I'm trying to parallelize a simulation of a 2D Ising model to get some expectation values as a function of the temperature of the system. For L=48, the single-threaded version takes about 240 seconds to run 20 temperatures with 1 seed each, but the parallelized version takes about 268 seconds, which is actually slightly slower.
If you take the time per seed per temperature, that works out to 12 seconds for the single-threaded version and 13.4 seconds for the parallelized version. I'm looking for help with my code because I don't understand these timings. I thought that the parallelized version would split the temperatures among all the threads and therefore take about 30 seconds to complete.
I need to run the simulation for 50 temperatures and 200 seeds each, for 5 values of L. It would help a lot to reduce the compute time, because otherwise it could take 20 hours for L=48 and several days for L=72.
I'm using an i7-10700KF (8 cores, 16 logical threads).
program Ising
use omp_lib
implicit none
integer L, seed, i, j, seed0, nseed,k
parameter (L=48)
integer s(1:L, 1:L)
integer*4 pbc(0:L+1), mctot, N, mcd, mcini, difE
real*8 genrand_real2, magne, energ, energia, temp, temp1, DE
real*8 mag, w(-8:8)
real*8 start, finish
real*8 sum, sume, sume2, summ, summ2, sumam, vare, varm, maxcv, maxx
real*8 cv, x, Tmaxcv, Tmaxx
integer irand, jrand
11 format(10(f20.6))
! Initialize variables
mctot = 80000
mcd = 20
mcini = 8000
N = L*L
seed0 = 20347880
nseed = 20
maxcv=0.d0
maxx=0.d0
! Initialize vector pbc
pbc(0) = L
pbc(L+1) = 1
do i = 1, L
pbc(i) = i
end do
! Initialize matrix s with random values
do i = 1, L
do j = 1, L
if (genrand_real2() < 0.5) then
s(i,j) = 1
else
s(i,j) = -1
endif
end do
end do
! Metropolis algorithm
open(1, file='Expectation values.dat')
start = omp_get_wtime()
write(1,*) '#Temp, ','E, ','E2, ','M, ','M2, ','|M|, ','VarE, ','VarM, ',&
'Cv, ','X, '
!Start loop to calculate for different temperatures
!$OMP PARALLEL PRIVATE(s,seed,w,energia,difE,irand,jrand,temp,mag,sum,sume,sume2,summ,summ2,sumam,vare,varm,cv,x)
temp1 = 1.59d0
!$OMP DO ordered schedule(dynamic)
do k = 1, 10
temp = temp1 + (0.01d0*k)
!Define the matrix w, which contains the values of the Boltzmann function for each temperature, so as not to have to calculate them each iteration
do i = -8, 8
w(i) = dexp(-i/temp)
end do
write(*,*) "Temperature: ", temp, "Thread", omp_get_thread_num()
sum = 0.d0
sume = 0.d0
sume2 = 0.d0
summ = 0.d0
summ2 = 0.d0
sumam = 0.d0
do seed = seed0, seed0 + nseed-1, 1
call init_genrand(seed)
call reinicia(s,l)
energia = energ(s,l,pbc)
do i = 1, mctot
do j = 1, N
irand = int(genrand_real2()*L) + 1
jrand = int(genrand_real2()*L) + 1
difE = int(DE(s,l,irand,jrand,pbc))
if (difE < 0) then
s(irand,jrand) = -s(irand,jrand)
energia = energia + difE
else if (genrand_real2() < w(int(difE))) then
s(irand,jrand) = -s(irand,jrand)
energia = energia + difE
endif
end do
if ((i > mcini).and.(mcd*(i/mcd)==i)) then
mag= magne(s,l)
sum = sum + 1.d0
sume = sume + energia
sume2 = sume2 + energia**2
summ = summ + mag
summ2 = summ2 + mag**2
sumam = sumam + abs(mag)
endif
end do
end do
!Energy
sume=sume/(sum*N)
sume2=sume2/(sum*N*N)
!Magnetitzation
summ = summ/(sum*N)
sumam=sumam/(sum*N)
summ2=summ2/(sum*N*N)
!Variances
vare = dsqrt(sume2-sume*sume)/dsqrt(sum)
varm = dsqrt(summ2-summ*summ)/dsqrt(sum)
!Cv
cv = (N*(sume2-sume*sume))/temp**2
if (cv.gt.maxcv) then
maxcv=cv
Tmaxcv=temp
endif
!X
x = (N*(summ2-summ*summ))/temp
if (x.gt.maxx) then
maxx=x
Tmaxx=temp
endif
write(1,11) temp,sume,sume2,summ,summ2,sumam,vare,varm,cv,x
end do
!$OMP END DO
!$OMP END PARALLEL
finish = omp_get_wtime()
close(1)
print*, "Time: ",(finish-start),"Seconds"
end program Ising
! Functions
!Function that calculates the energy of the matrix s
real*8 function energ(S,L, pbc)
implicit none
integer s(1:L, 1:L), i, j, L
integer*4 pbc(0:L+1)
real*8 ene
ene = 0.0d0
do i = 1, L
do j = 1, L
ene = ene - s(i,j) * s(pbc(i+1),j) - s(i,j) * s(i,pbc(j+1))
end do
end do
energ = ene
return
end function energ
!Function that calculates the difference in energy that occurs when the spin of position (i, j) is changed
real*8 function DE(S,L,i,j,pbc)
implicit none
integer s(1:L, 1:L), i, j, L, difE
integer*4 pbc(0:L+1)
real*8 suma
difE = 0
suma = 0.0d0
suma = suma + s(pbc(i-1),j) + s(pbc(i+1),j) + s(i,pbc(j-1)) + s(i,pbc(j+1))
difE = difE + int(2 * s(i,j) * suma)
DE = difE
return
end function DE
!Function that calculates the magnetization of the matrix s
real*8 function magne(S,L)
implicit none
integer s(1:L, 1:L),L
magne = sum(s)
return
end function magne
! SUBRUTINES
!Subroutine that resets the matrix s with random values
subroutine reinicia(S,L)
implicit none
integer s(1:L, 1:L), i,j,L
real*8 genrand_real2
do i = 1, L
do j = 1, L
if (genrand_real2() < 0.5) then
s(i,j) = 1
else
s(i,j) = -1
endif
end do
end do
return
end subroutine
I have tried parallelizing the seeds loop instead of the temperatures, but it takes almost the same time, so I think I'm not parallelizing it correctly, because this looks like code that should parallelize well.
The other option I have thought of is to parallelize the simulation manually: compile 16 copies of the program, each handling a different range of temperatures, and run them all concurrently so that each one gets its own thread. However, this approach would require a lot of extra RAM.
I have two intersecting circles, as specified in the figure, and I want to find the area of each part separately using the Monte Carlo method in MATLAB.
The code doesn't draw the rectangle or the circles correctly, so I guess what is wrong is my calculation of x and y. I'm not very familiar with the geometry equations needed here, so I need help with the equations.
This is my code so far:
n=1000;
%supposing that a rectangle will contain both circles so :
% the mid point of the distance between 2 circles will be (0,6)
% then by adding the radius of the left and right circles the total distance
% will be 27 , 11 from the left and 16 from the right
% width of rectangle = 24
x=27.*rand(n-1)-11;
y=24.*rand(n-1)+2;
count=0;
for i=1:n
if((x(i))^2+(y(i))^2<=25 && (x(i))^2+(y(i)-12)^2<=100)
count=count+1;
figure(2);
plot(x(i),y(i),'b+')
hold on
elseif(~(x(i))^2+(y(i))^2<=25 &&(x(i))^2+(y(i)-12)^2<=100)
figure(2);
plot(x(i),y(i),'y+')
hold on
else
figure(2);
plot(x(i),y(i),'r+')
end
end
Here are the errors I found:
x = 27*rand(n,1)-5
y = 24*rand(n,1)-12
The rectangle extents were incorrect, and using rand(n-1) gives you an (n-1)-by-(n-1) matrix rather than a vector of n samples.
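For example (with a hypothetical n = 5, just to show the sizes):
n = 5;
size(rand(n-1))   % returns [4 4]: a square (n-1)-by-(n-1) matrix
size(rand(n,1))   % returns [5 1]: a column vector of n samples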
First if:
(x(i))^2+(y(i))^2<=25 && (x(i)-12)^2+(y(i))^2<=100
The center of the large circle is at x = 12, not y = 12.
Second if:
~(x(i))^2+(y(i))^2<=25 &&(x(i)-12)^2+(y(i))^2<=100
This code can be improved by using logical indexing.
For example, using R, you could do the following (the MATLAB code is left as an exercise; one possible translation is sketched after the R version):
n = 10000
x = 27*runif(n)-5
y = 24*runif(n)-12
plot(x,y)
r = (x^2 + y^2)<=25 & ((x-12)^2 + y^2)<=100
g = (x^2 + y^2)<=25
b = ((x-12)^2 + y^2)<=100
points(x[g],y[g],col="green")
points(x[b],y[b],col="blue")
points(x[r],y[r],col="red")
which gives a scatter plot with the points that fall only inside the small circle in green, only inside the large circle in blue, and inside the intersection in red (original figure not reproduced here).
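Here is one possible MATLAB translation of the R snippet above, using the same logical indexing (an illustrative sketch with the same rectangle and circle definitions as the R code):
n = 10000;
x = 27*rand(n,1) - 5;                                  % direct translation of the R code above
y = 24*rand(n,1) - 12;
figure; plot(x, y, 'k.'); hold on
r = (x.^2 + y.^2) <= 25 & ((x-12).^2 + y.^2) <= 100;   % intersection
g = (x.^2 + y.^2) <= 25;                               % small circle
b = ((x-12).^2 + y.^2) <= 100;                         % large circle
plot(x(g), y(g), 'g+')
plot(x(b), y(b), 'b+')
plot(x(r), y(r), 'r+')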
Here is my generic solution for any two circles (without any hardcoded value):
function [ P ] = circles_intersection_area( k1, k2, N )
%CIRCLES_INTERSECTION_AREA Summary...
% Adnan A.
x1 = k1(1);
y1 = k1(2);
r1 = k1(3);
x2 = k2(1);
y2 = k2(2);
r2 = k2(3);
if sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2)) >= (r1 + r2)
% no intersection
P = 0;
return
end
% Wrapper rectangle config
a_min = x1 - r1 - 2*r2;
a_max = x1 + r1 + 2*r2;
b_min = y1 - r1 - 2*r2;
b_max = y1 + r1 + 2*r2;
% Monte Carlo algorithm
n = 0;
for i = 1:N
rand_x = unifrnd(a_min, a_max);
rand_y = unifrnd(b_min, b_max);
if sqrt((rand_x - x1)^2 + (rand_y - y1)^2) < r1 && sqrt((rand_x - x2)^2 + (rand_y - y2)^2) < r2
% is a point in the both of circles
n = n + 1;
plot(rand_x,rand_y, 'go-');
hold on;
else
plot(rand_x,rand_y, 'ko-');
hold on;
end
end
P = (a_max - a_min) * (b_max - b_min) * n / N;
end
Call it like: circles_intersection_area([-0.4,0,1], [0.4,0,1], 10000) where the first param is the first circle (x,y,r) and the second param is the second circle.
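If you also need the two non-overlapping parts that the question asks about, you can combine the Monte Carlo estimate of the intersection with the analytic circle areas. For example, for the two unit circles in the call above (a sketch that is not part of the original function):
P  = circles_intersection_area([-0.4, 0, 1], [0.4, 0, 1], 10000);  % overlap (lens) area
A1 = pi*1^2 - P;   % part of the first circle outside the overlap (illustrative names)
A2 = pi*1^2 - P;   % part of the second circle outside the overlap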
Without using For loop.
n = 100000;
data = rand(2,n);
data = data*2*30 - 30;
x = data(1,:);
y = data(2,:);
plot(x,y,'ro');
inside5 = find(x.^2 + y.^2 <=25);
hold on
plot (x(inside5),y(inside5),'bo');
hold on
inside12 = find(x.^2 + (y-12).^2<=144);
plot (x(inside12),y(inside12),'g');
hold on
insidefinal1 = find(x.^2 + y.^2 <=25 & x.^2 + (y-12).^2>=144);
insidefinal2 = find(x.^2 + y.^2 >=25 & x.^2 + (y-12).^2<=144);
% plot(x(insidefinal1),y(insidefinal1),'bo');
hold on
% plot(x(insidefinal2),y(insidefinal2),'ro');
insidefinal3 = find(x.^2 + y.^2 <=25 & x.^2 + (y-12).^2<=144);
% plot(x(insidefinal3),y(insidefinal3),'ro');
area1=(60^2)*(length(insidefinal1)/n);
area3=(60^2)*(length(insidefinal2)/n);
area2= (60^2)*(length(insidefinal3)/n);
I am working to solve Poisson's equation (in 2d axisymmetric cylindrical coordinates) using the Jacobi method. The L2 norm decreases from ~1E3 on the first iteration (I have a really bad guess) to ~0.2 very slowly. Then, the L2 norm begins to increase over many iterations.
My geometry is parallel plates with sharp points at r = 0 on both plates. (If that matters).
Is there some error in my code? Do I need to switch to a different algorithm? (I have a DADI algorithm that is not yet working.)
Here is my Jacobi method subroutine; this is then just wrapped in a while loop.
subroutine Jacobi(PoissonRHS, V, resid)
implicit none
real, dimension(0:,0:) :: PoissonRHS, V
REAL resid
integer :: i,j, lb, ub
real, dimension(0:size(V,1)-1, 0:size(V,2)-1) :: oldV
real :: dr = delta(1)
real :: dz = delta(2)
real :: dr2 = (delta(1))**(-2)
real :: dz2 = (delta(2))**(-2)
integer :: M = cells(1)
integer :: N = cells(2)
oldV = V
!Note: All of the equations are second order accurate
!If at r = 0 and in the computational domain
! This is the smoothness condition, dV(r=0)/dr = 0
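! (second-order one-sided difference: dV/dr at r = 0 is approximately (-3*V(0,:) + 4*V(1,:) - V(2,:))/(2*dr); setting it to zero gives the update below)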
V(0,:) = (4.0*oldV(1,:)-oldV(2,:))/3.0
!If at r = rMax and in the computational domain
! This is an approximation and should be fixed to improve accuracy, it should be
! lim r->inf V' = 0, while this is V'(r = R) = 0
V(M, 1:N-1) = 0.5 / (dr2 + dz2) * ( &
(2.0*dr2)*oldV(M-1,1:N-1) + &
dz2 * (oldV(M,2:N) + oldV(M,0:N-2)) &
- PoissonRHS(M,1:N-1))
do i = 1, M-1
lb = max(0, nint(lowerBoundary(i * dr) / dz)) + 1
ub = min(N, nint(upperBoundary(i * dr) / dz)) - 1
V(i,lb:ub) = 0.5 / (dr2 + dz2) * ( &
((1.0 - 0.5/dble(i))*dr2)*oldV(i-1,lb:ub) + &
((1.0 + 0.5/dble(i))*dr2)*oldV(i+1,lb:ub) + &
dz2 * (oldV(i,lb+1:ub+1) + oldV(i,lb-1:ub-1)) &
- PoissonRHS(i,lb:ub))
V(i, 0:lb-1) = V0
V(i, ub+1:N) = VL
enddo
!compare to old V values to check for convergence
resid = sqrt(sum((oldV-V)**2))
return
end subroutine Jacobi
Based on additional reading, it seems this was a precision problem, because (for example) I had the expression
V(i,lb:ub) = 0.5 / (dr2 + dz2) * ( &
((1.0 - 0.5/dble(i))*dr2)*oldV(i-1,lb:ub) + &
((1.0 + 0.5/dble(i))*dr2)*oldV(i+1,lb:ub) + &
dz2 * (oldV(i,lb+1:ub+1) + oldV(i,lb-1:ub-1)) &
- PoissonRHS(i,lb:ub))
where dr2 and dz2 are very large. By distributing these factors, I got terms that are ~1, and the code now converges (slowly, but that's a property of the mathematics).
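As a rough illustration with hypothetical grid spacings (not values from the original post): if dr = dz = 1.0e-3, then dr2 = dz2 = 1.0e6, so the original update multiplies intermediate sums of order 1e6 by 0.5/(dr2 + dz2) = 2.5e-7. After distributing, the coefficients below become b = c = 0.25 and d = -2.5e-7 (applied only to PoissonRHS), so the intermediate terms stay of order 1.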
So my new code is
subroutine Preconditioned_Jacobi(PoissonRHS, V, resid)
implicit none
real, dimension(0:,0:) :: PoissonRHS, V
REAL resid
integer :: i,j, lb, ub
real, dimension(0:size(V,1)-1, 0:size(V,2)-1) :: oldV
real :: dr = delta(1)
real :: dz = delta(2)
real :: dr2 = (delta(1))**(-2)
real :: dz2 = (delta(2))**(-2)
real :: b,c,d
integer :: M = cells(1)
integer :: N = cells(2)
b = 0.5*(dr**2)/((dr**2) + (dz**2))
c = 0.5*(dz**2)/((dr**2) + (dz**2))
d = -0.5 / (dr2 + dz2)
oldV = V
!Note: All of the equations are second order accurate
!If at r = 0 and in the computational domain
! This is the smoothness condition, dV(r=0)/dr = 0
V(0,:) = (4.0*oldV(1,:)-oldV(2,:))/3.0 !same as: oldV(0,:) - 2.0/3.0 * (1.5 * oldV(0,:) - 2.0 * oldV(1,:) + 0.5 * oldV(2,:) - 0)
!If at r = rMax and in the computational domain
! This is an approximation and should be fixed to improve accuracy, it should be
! lim r->inf V' = 0, while this is V'(r = R) = 0
V(M,1:N-1) = d*PoissonRHS(M,1:N-1) &
+ 2.0*c * oldV(M-1,1:N-1) &
+ b * ( oldV(M,0:N-2) + oldV(M,2:N) )
do i = 1, M-1
lb = max(0, nint(lowerBoundary(i * dr) / dz)) + 1
ub = min(N, nint(upperBoundary(i * dr) / dz)) - 1
V(i,lb:ub) = d*PoissonRHS(i,lb:ub) &
+ (c * (1.0-0.5/dble(i)) * oldV(i-1,lb:ub)) &
+ (c * (1.0+0.5/dble(i)) * oldV(i+1,lb:ub)) &
+ b * (oldV(i,lb-1:ub-1) + oldV(i,lb+1:ub+1))
V(i, 0:lb-1) = V0
V(i, ub+1:N) = VL
enddo
!compare to old V values to check for convergence
resid = sum(abs(oldV-V))
return
end subroutine Preconditioned_Jacobi
I have a function in MATLAB which performs Gram-Schmidt orthogonalisation with a very important weighting applied to the inner products (I don't think MATLAB's built-in function supports this).
This function works well as far as I can tell; however, it is too slow on large matrices.
What would be the best way to improve this?
I have tried converting it to a MEX file, but I lose parallelisation with the compiler I'm using, so it ends up slower.
I was thinking of running it on a GPU, as the element-wise multiplications are highly parallelisable (but I'd prefer the implementation to be easily portable).
Can anyone vectorise this code or make it faster? I am not sure how to do it elegantly ...
I know the stackoverflow minds here are amazing, consider this a challenge :)
Function
function [Q, R] = Gram_Schmidt(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
v = zeros(n, 1);
for j = 1:n
v = A(:,j);
for i = 1:j-1
R(i,j) = sum( v .* conj( Q(:,i) ) .* w ) / ...
sum( Q(:,i) .* conj( Q(:,i) ) .* w );
v = v - R(i,j) * Q(:,i);
end
R(j,j) = norm(v);
Q(:,j) = v / R(j,j);
end
end
where A is an m x n matrix of complex numbers and w is an m x 1 vector of real numbers.
Bottle-neck
This is the expression for R(i,j), which is the slowest part of the function (not 100% sure if the notation is correct):
R(i,j) = <v, Q(:,i)>_w / <Q(:,i), Q(:,i)>_w = sum( v .* conj(Q(:,i)) .* w ) / sum( Q(:,i) .* conj(Q(:,i)) .* w )
where w is a non-negative weight function.
The weighted inner product is mentioned on several Wikipedia pages, for example the articles on weight functions and on orthogonal functions.
Reproduction
You can produce results using the following script:
A = complex( rand(360000,100), rand(360000,100));
w = rand(360000, 1);
[Q, R] = Gram_Schmidt(A, w);
where A and w are the inputs.
Speed and Computation
If you use the above script, the profiler shows that the bulk of the time is spent computing R(i,j) (profiler output not reproduced here).
Testing Result
You can test the results by comparing a function with the one above using the following script:
A = complex( rand( 100, 10), rand( 100, 10));
w = rand( 100, 1);
[Q , R ] = Gram_Schmidt( A, w);
[Q2, R2] = Gram_Schmidt2( A, w);
zeros1 = norm( Q - Q2 );
zeros2 = norm( R - R2 );
where Gram_Schmidt is the function described earlier and Gram_Schmidt2 is an alternative function. The results zeros1 and zeros2 should then be very close to zero.
Note:
I tried speeding up the calculation of R(i,j) with the following but to no avail ...
R(i,j) = ( w' * ( v .* conj( Q(:,i) ) ) ) / ...
( w' * ( Q(:,i) .* conj( Q(:,i) ) ) );
1)
My first attempt at vectorization:
function [Q, R] = Gram_Schmidt1(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
for j = 1:n
v = A(:,j);
QQ = Q(:,1:j-1);
QQ = bsxfun(@rdivide, bsxfun(@times, w, conj(QQ)), w.' * abs(QQ).^2);
for i = 1:j-1
R(i,j) = (v.' * QQ(:,i));
v = v - R(i,j) * Q(:,i);
end
R(j,j) = norm(v);
Q(:,j) = v / R(j,j);
end
end
Unfortunately, it turned out to be slower than the original function.
2)
Then I realized that the columns of this intermediate matrix QQ are built incrementally, and that previous ones are not modified. So here is my second attempt:
function [Q, R] = Gram_Schmidt2(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
QQ = complex(zeros(m, n-1));
for j = 1:n
if j>1
qj = Q(:,j-1);
QQ(:,j-1) = (conj(qj) .* w) ./ (w.' * (qj.*conj(qj)));
end
v = A(:,j);
for i = 1:j-1
R(i,j) = (v.' * QQ(:,i));
v = v - R(i,j) * Q(:,i);
end
R(j,j) = norm(v);
Q(:,j) = v / R(j,j);
end
end
Technically no major vectorization was done; I've only precomputed intermediate results, and moved the computation outside the inner loop.
Based on a quick benchmark, this new version is definitely faster:
% some random data
>> M = 10000; N = 100;
>> A = complex(rand(M,N), rand(M,N));
>> w = rand(M,1);
% time
>> timeit(@() Gram_Schmidt(A,w), 2) % original version
ans =
1.2444
>> timeit(@() Gram_Schmidt1(A,w), 2) % first attempt (vectorized)
ans =
2.0990
>> timeit(@() Gram_Schmidt2(A,w), 2) % final version
ans =
0.4698
% check results
>> [Q,R] = Gram_Schmidt(A,w);
>> [Q2,R2] = Gram_Schmidt2(A,w);
>> norm(Q-Q2)
ans =
4.2796e-14
>> norm(R-R2)
ans =
1.7782e-12
EDIT:
Following the comments, we can rewrite the second solution to get rid of the if-statement, by moving that part to the end of the outer loop (i.e. immediately after computing the new column Q(:,j), we compute and store the corresponding QQ(:,j)).
The function is identical in output, and timing is not that different either; the code is just a bit shorter!
function [Q, R] = Gram_Schmidt3(A, w)
[m, n] = size(A);
Q = zeros(m, n, 'like',A);
R = zeros(n, n, 'like',A);
QQ = zeros(m, n, 'like',A);
for j = 1:n
v = A(:,j);
for i = 1:j-1
R(i,j) = (v.' * QQ(:,i));
v = v - R(i,j) * Q(:,i);
end
R(j,j) = norm(v);
Q(:,j) = v / R(j,j);
QQ(:,j) = (conj(Q(:,j)) .* w) ./ (w.' * (Q(:,j).*conj(Q(:,j))));
end
end
Note that I used the zeros(..., 'like',A) syntax (new in recent MATLAB versions). This allows us to run the function unmodified on the GPU (assuming you have the Parallel Computing Toolbox):
% CPU
[Q3,R3] = Gram_Schmidt3(A, w);
vs.
% GPU
AA = gpuArray(A);
[Q3,R3] = Gram_Schmidt3(AA, w);
Unfortunately in my case, it wasn't any faster. In fact it was many times slower to run on the GPU than on the CPU, but it was worth a shot :)
There is a long discussion here, but to jump to the answer: you have weighted the numerator and denominator of the R calculation by a vector w. The weighting occurs in the inner loop and consists of a triple dot product, A dot Q dot w in the numerator and Q dot Q dot w in the denominator. If you make one change, I think the code will run significantly faster: write num = (A dot sqrt(w)) dot (Q dot sqrt(w)) and den = (Q dot sqrt(w)) dot (Q dot sqrt(w)). That moves the (A dot sqrt(w)) and (Q dot sqrt(w)) product calculations out of the inner loop.
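A minimal sketch of that factoring, assuming v is the current column being orthogonalised and i indexes a previously computed column of Q (illustrative names only; the full implementation appears further down as Gram_Schmidt_3):
sw   = sqrt(w);        % precompute once, outside both loops
v_sw = v .* sw;        % column of A pre-scaled by sqrt(w)
q_sw = Q(:,i) .* sw;   % column of Q pre-scaled by sqrt(w)
% same ratio as sum(v.*conj(Q(:,i)).*w) / sum(Q(:,i).*conj(Q(:,i)).*w)
R(i,j) = (q_sw' * v_sw) / (q_sw' * q_sw);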
I would like to give a description of the formulation of Gram-Schmidt orthogonalization that, hopefully, in addition to giving an alternative computational solution, gives further insight into the advantages of GSO.
The "goals" of GSO are twofold. First, to enable the solution of an equation like Ax=y, where A has far more rows than columns. This situation occurs frequently when measuring data, in that it is easy to measure more data than the number of states. The approach to the first goal is to rewrite A as QR such that the columns of Q are orthogonal and normalized, and R is a triangular matrix. The algorithm you provided, I believe, achieves the first goal. The Q represents the basis space of the A matrix, and R represents the amplitude of each basis vector required to generate each column of A.
The second goal of GSO is to rank the basis vectors in order of significance. This is the step that you have not done. While including this step may increase the solution time, the results will identify which elements of x are important, according to the data contained in the measurements represented by A.
But, I think, with this implementation, the solution is faster than the approach you presented.
A = Q*R, where the columns Q_j are orthonormal and R is upper triangular (R_ij = 0 for i > j). The Q_j are the orthogonal basis vectors for A, and R_ij is the participation of each Q_j in creating a column of A. So,
A_j1 = Q_j1 * R_1,1
A_j2 = Q_j1 * R_1,2 + Q_j2 * R_2,2
A_j3 = Q_j1 * R_1,3 + Q_j2 * R_2,3 + Q_j3 * R_3,3
By inspection, you can write
A_j1 = ( A_j1 / | A_j1 | ) * | A_j1 | = Q_j1 * R_1,1
Then you project Q_j1 onto every other column of A to get the R_1,j elements:
R_1,2 = Q_j1 dot Aj2
R_1,3 = Q_j1 dot Aj3
...
R_1,j(j>1) = A_j dot Q_j1
Then you subtract the projection onto Q_j1 from the columns of A (this would set the first column to zero, so you can ignore the first column):
for j = 2,n
A_j = A_j - R_1,j * Q_j1
end
Now one column of A has been removed: the first orthonormal basis vector, Q_j1, has been determined; the contribution of the first basis vector to each column, R_1,j, has been determined; and the contribution of the first basis vector has been subtracted from each column. Repeat this process for the remaining columns of A to obtain the remaining columns of Q and rows of R.
for i = 1,n
R_ii = |A_i| A_i is the ith column of A, |A_i| is magnitude of A_i
Q_i = A_i / R_ii Q_i is the ith column of Q
for j = i, n
R_ij = | A_j dot Q_i |
A_j = A_j - R_ij * Q_i
end
end
You are trying to weight the rows of A with w. Here is one approach: I would normalize w and incorporate the effect into R. You "removed" the effects of w by multiplying and dividing by w. An alternative to "removing" the effect is to normalize the amplitude of w to one.
w = w / | w |
for i = 1,n
R_ii = |A_i inner product w| # A_i inner product w = A_i .* w
Q_i = A_i / R_ii
for j = i, n
R_ij = | (A_j inner product w) dot Q_i | # A dot B = A' * B
A_j = A_j - R_ij * Q_i
end
end
Another approach to implementing w is to normalize w and then premultiply every column of A by w. That cleanly weights the rows of A, and reduces the number of multiplications.
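A small sketch of this idea (an assumed illustration, using bsxfun for compatibility with older MATLAB; wn and Aw are illustrative names):
wn = w / norm(w);              % normalize the weight vector
Aw = bsxfun(@times, A, wn);    % scale every row of A by its weight
% then run the unweighted Gram-Schmidt on Aw instead of A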
Using the following may help in speeding up your code
A inner product B = A .* B
A dot w = A' w
(A B)' = B'A'
A' conj(A) = |A|^2
The above can be vectorized easily in matlab, pretty much as written.
But you are missing the second portion, the ranking of A, which tells you which states (elements of x in A x = y) are significantly represented in the data.
The ranking procedure is easy to describe, but I'll let you work out the programming details. The above procedure essentially assumes the columns of A are in order of significance: the first column is subtracted off all the remaining columns, then the 2nd column is subtracted off the remaining columns, etc.

The first row of R represents the contribution of the first column of Q to each column of A. If you sum the absolute values of the first row of R, you get a measure of the contribution of the first column of Q to the matrix A. So you just evaluate each column of A as the first (or next) column of Q, and determine the ranking score of that Q column's contribution to the remaining columns of A. Then select the A column that has the highest rank as the next Q column. Coding this essentially comes down to pre-estimating the next row of R, for every remaining column of A, in order to determine which ranked R magnitude has the largest amplitude. Having an index vector that represents the original column order of A will be beneficial.

By ranking the basis vectors, you end up with the "principal" basis vectors that represent A, which are typically much smaller in number than the number of columns in A.
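A hypothetical sketch of that selection step (not from the original answer; remaining is assumed to hold the indices of the columns of A that have not yet been chosen):
score = zeros(1, numel(remaining));              % 'remaining' defined by the surrounding (not shown) outer loop
for c = 1:numel(remaining)
    q = A(:, remaining(c));
    q = q / norm(q);                             % candidate next basis vector
    score(c) = sum(abs(q' * A(:, remaining)));   % pre-estimate of the |R| row it would generate
end
[~, best] = max(score);                          % highest-ranked column becomes the next Q column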
Also, if you rank the columns, it is not necessary to calculate every column of R. When you know which columns of A don't contain any useful information, there's no real benefit to keeping those columns.
In structural dynamics, one approach to reducing the number of degrees of freedom is to calculate the eigenvalues, assuming you have representative values for the mass and stiffness matrices. If you think about it, the above approach can be used to "calculate" the M and K (and C) matrices from measured response, and also to identify the "measurement response shapes" that are significantly represented in the data. These are different from, and potentially more important than, the mode shapes. So you can solve very difficult problems, i.e. estimation of state matrices and of the number of degrees of freedom represented, from measured response, by the above approach. If you read up on N4SID, you'll see its author did something similar, except using SVD instead of GSO. I don't like the technical description of N4SID; there is too much focus on vector projection notation, which is simply a dot product.
There may be one or two errors in the above information; I'm writing this off the top of my head before rushing off to work. So check the algorithm / equations as you implement... Good luck.
Coming back to your question of how to optimize the algorithm when you weight with w: here is a basic GSO algorithm, without the sorting, written to be compatible with your function.
Note, the code below is in octave, not matlab. There are some minor differences.
function [Q, R] = Gram_Schmidt_2(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
# Outer loop identifies the basis vectors
for j = 1:n
aCol = A(:,j);
# Subtract off the basis vector
for i = 1:(j-1)
R(i,j) = ctranspose(Q(:,i)) * aCol;
aCol = aCol - R(i,j) * Q(:,i);
end
amp_A_col = norm(aCol);
R(j,j) = amp_A_col;
Q(:,j) = aCol / amp_A_col;
end
end
To get your algorithm, only one line needs to change. But you lose a lot of speed, because "ctranspose(Q(:,i)) * aCol" is a single vector (dot-product) operation, while "sum( aCol .* conj( Q(:,i) ) .* w )" is an element-wise multiplication followed by a sum.
function [Q, R] = Gram_Schmidt_2(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
# Outer loop identifies the basis vectors
for j = 1:n
aCol = A(:,j);
# Subtract off the basis vector
for i = 1:(j-1)
# R(i,j) = ctranspose(Q(:,i)) * aCol;
R(i,j) = sum( aCol .* conj( Q(:,i) ) .* w ) / ...
sum( Q(:,i) .* conj( Q(:,i) ) .* w );
aCol = aCol - R(i,j) * Q(:,i);
end
amp_A_col = norm(aCol);
R(j,j) = amp_A_col;
Q(:,j) = aCol / amp_A_col;
end
end
You can change it back to a vector operation by weighting aCol and Q by the sqrt of w.
function [Q, R] = Gram_Schmidt_3(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
Q_sw = complex(zeros(m, n));
sw = w .^ 0.5;
for j = 1:n
aCol = A(:,j);
aCol_sw = aCol .* sw;
# Subtract off the basis vector
for i = 1:(j-1)
# R(i,j) = ctranspose(Q(:,i)) * aCol;
numTerm = ctranspose( Q_sw(:,i) ) * aCol_sw;
denTerm = ctranspose( Q_sw(:,i) ) * Q_sw(:,i);
R(i,j) = numTerm / denTerm;
aCol_sw = aCol_sw - R(i,j) * Q_sw(:,i);
end
aCol = aCol_sw ./ sw;
amp_A_col = norm(aCol);
R(j,j) = amp_A_col;
Q(:,j) = aCol / amp_A_col;
Q_sw(:,j) = Q(:,j) .* sw;
end
end
As pointed out by JacobD, the above function does not run faster. Possibly it takes time to create the additional arrays. Another grouping strategy for the triple product is to group w with conj(Q). Hope this is faster...
function [Q, R] = Gram_Schmidt_4(A, w)
[m, n] = size(A);
Q = complex(zeros(m, n));
R = complex(zeros(n, n));
for j = 1:n
aCol = A(:,j);
for i = 1:(j-1)
cqw = conj(Q(:,i)) .* w;
R(i,j) = ( transpose( aCol ) * cqw) ...
/ (transpose( Q(:,i) ) * cqw);
aCol = aCol - R(i,j) * Q(:,i);
end
amp_A_col = norm(aCol);
R(j,j) = amp_A_col;
Q(:,j) = aCol / amp_A_col;
end
end
Here's a driver function to time different versions.
function Gram_Schmidt_tester_2
nSamples = 360000;
nMeas = 100;
nMeas = 15;
A = complex( rand(nSamples,nMeas), rand(nSamples,nMeas));
w = rand(nSamples, 1);
profile on;
[Q1, R1] = Gram_Schmidt_basic(A);
profile off;
data1 = profile ("info");
tData1=data1.FunctionTable(1).TotalTime;
approx_zero1 = A - Q1 * R1;
max_value1 = max(max(abs(approx_zero1)));
profile on;
[Q2, R2] = Gram_Schmidt_w_Orig(A, w);
profile off;
data2 = profile ("info");
tData2=data2.FunctionTable(1).TotalTime;
approx_zero2 = A - Q2 * R2;
max_value2 = max(max(abs(approx_zero2)));
sw=w.^0.5;
profile on;
[Q3, R3] = Gram_Schmidt_sqrt_w(A, w);
profile off;
data3 = profile ("info");
tData3=data3.FunctionTable(1).TotalTime;
approx_zero3 = A - Q3 * R3;
max_value3 = max(max(abs(approx_zero3)));
profile on;
[Q4, R4] = Gram_Schmidt_4(A, w);
profile off;
data4 = profile ("info");
tData4=data4.FunctionTable(1).TotalTime;
approx_zero4 = A - Q4 * R4;
max_value4 = max(max(abs(approx_zero4)));
profile on;
[Q5, R5] = Gram_Schmidt_5(A, w);
profile off;
data5 = profile ("info");
tData5=data5.FunctionTable(1).TotalTime;
approx_zero5 = A - Q5 * R5;
max_value5 = max(max(abs(approx_zero5)));
profile on;
[Q2a, R2a] = Gram_Schmidt2a(A, w);
profile off;
data2a = profile ("info");
tData2a=data2a.FunctionTable(1).TotalTime;
approx_zero2a = A - Q2a * R2a;
max_value2a = max(max(abs(approx_zero2a)));
profshow (data1, 6);
profshow (data2, 6);
profshow (data3, 6);
profshow (data4, 6);
profshow (data5, 6);
profshow (data2a, 6);
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g',
data1.FunctionTable(1).FunctionName,
data1.FunctionTable(1).TotalTime,
nSamples, nMeas, max_value1)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g',
data2.FunctionTable(1).FunctionName,
data2.FunctionTable(1).TotalTime,
nSamples, nMeas, max_value2)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g',
data3.FunctionTable(1).FunctionName,
data3.FunctionTable(1).TotalTime,
nSamples, nMeas, max_value3)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g',
data4.FunctionTable(1).FunctionName,
data4.FunctionTable(1).TotalTime,
nSamples, nMeas, max_value4)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g',
data5.FunctionTable(1).FunctionName,
data5.FunctionTable(1).TotalTime,
nSamples, nMeas, max_value5)
sprintf('Time for %s is %5.3f sec with %d samples and %d meas, max value is %g',
data2a.FunctionTable(1).FunctionName,
data2a.FunctionTable(1).TotalTime,
nSamples, nMeas, max_value2a)
end
On my old home laptop, in Octave, the results are
ans = Time for Gram_Schmidt_basic is 0.889 sec with 360000 samples and 15 meas, max value is 1.57009e-16
ans = Time for Gram_Schmidt_w_Orig is 0.952 sec with 360000 samples and 15 meas, max value is 6.36717e-16
ans = Time for Gram_Schmidt_sqrt_w is 0.390 sec with 360000 samples and 15 meas, max value is 6.47366e-16
ans = Time for Gram_Schmidt_4 is 0.452 sec with 360000 samples and 15 meas, max value is 6.47366e-16
ans = Time for Gram_Schmidt_5 is 2.636 sec with 360000 samples and 15 meas, max value is 6.47366e-16
ans = Time for Gram_Schmidt2a is 0.905 sec with 360000 samples and 15 meas, max value is 6.68443e-16
These results indicate the fastest algorithm is the sqrt_w algorithm above at 0.39 sec, followed by the grouping of conj(Q) with w (above) at 0.452 sec, then version 2 of Amro's solution at 0.905 sec, then the original algorithm in the question at 0.952 sec, and then a version 5 which interchanges rows / columns to see whether row-oriented storage helps (code not included) at 2.636 sec. These results indicate the sqrt(w) split between A and Q is the fastest solution, but they are not consistent with JacobD's comment about sqrt(w) not being faster.
It is possible to vectorize this so only one loop is necessary. The important fundamental change from the original algorithm is that if you swap the inner and outer loops, you can vectorize the projection of the reference vector onto all remaining vectors. Working off @Amro's solution, I found that an inner loop is actually faster than the matrix subtraction. I do not understand why this is. Timing this against @Amro's solution, it is about 45% faster.
function [Q, R] = Gram_Schmidt5(A, w)
Q = A;
n_dimensions = size(A, 2);
R = zeros(n_dimensions);
R(1, 1) = norm(Q(:, 1));
Q(:, 1) = Q(:, 1) ./ R(1, 1);
for i = 2 : n_dimensions
Qw = (Q(:, i - 1) .* w)' * Q(:, (i - 1) : end);
R(i - 1, i : end) = Qw(2:end) / Qw(1);
%% Surprisingly this loop beats the matrix multiply
for j = i : n_dimensions
Q(:, j) = Q(:, j) - Q(:, i - 1) * R(i - 1, j);
end
%% This multiply is slower than above
% Q(:, i : end) = ...
% Q(:, i : end) - ...
% Q(:, i - 1) * R(i - 1, i : end);
R(i, i) = norm(Q(:,i));
Q(:, i) = Q(:, i) ./ R(i, i);
end