Matrices are not aligned - matrix

I am trying to calculate the standard deviation of a portfolio; however, I get an error saying the matrices are not aligned.
portfolio_std(opw_mv,cov_mat1)
ValueError: shapes (49, 1) and (49, 49) not aligned
Can anyone help? I have already tried transposing the weights.
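The shape mismatch suggests the weight vector is being multiplied into the covariance matrix on the wrong side. A minimal sketch of the usual formula, sigma_p = sqrt(w' * Sigma * w), assuming opw_mv is the (49, 1) weight column and cov_mat1 is the 49x49 covariance matrix (the function body below is an assumption, not the original code):
import numpy as np

def portfolio_std(weights, cov_mat):
    # weights: (n, 1) column vector, cov_mat: (n, n) covariance matrix
    # w.T @ cov @ w is a 1x1 matrix holding the portfolio variance
    variance = weights.T @ cov_mat @ weights
    return np.sqrt(variance.item())

weights = np.full((49, 1), 1 / 49)   # equal weights, purely for illustration
cov_mat = np.eye(49) * 0.01          # placeholder covariance matrix
print(portfolio_std(weights, cov_mat))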

Related

How do I calculate the mean squared error between two 3D tensors in matrix form?

I know that the MSE between two 2D matrices A and B, both of shape p x q, can be calculated in matrix terms as follows:
(1/n) tr((A-B)t (A-B))
The nice thing about this expression is that (A-B)t and (A-B) are conformable for matrix multiplication, and their product is a square matrix, which has a well-defined trace.
But if we have two 3D tensors A and B of shape p x q x r, then I don't understand how to form a product between them that gives a square matrix, so that the MSE can be written in terms of a trace.
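One way to see it: unfold the difference tensor into an ordinary matrix, and the trace expression gives exactly the element-wise MSE again. A small NumPy check of that equivalence (shapes and names are purely illustrative):
import numpy as np

p, q, r = 4, 5, 6
A = np.random.rand(p, q, r)
B = np.random.rand(p, q, r)

# Plain element-wise definition of the MSE
mse_elementwise = np.mean((A - B) ** 2)

# Matrix/trace form: unfold the 3D tensors into (p*q) x r matrices first
D = (A - B).reshape(p * q, r)
mse_trace = np.trace(D.T @ D) / (p * q * r)

print(np.allclose(mse_elementwise, mse_trace))   # True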

Dot product of a square rotation matrix to each element of another coordinate matrix

I have an array of Cartesian coordinate points: a 10x10x10 grid in which each entry is a 3-vector. In the example shown below, a scaled-down version of such an array, a, is given with dimensions 10x10x10x3. Then I have a rotation matrix R of size 3x3, and I want to take the dot product of R with each 3x1 position vector in a; for example, a[0,0,0] is the position vector of the top-left-most coordinate point.
A simple way to do this is with a for loop:
import numpy as np

a = np.ones([10, 10, 10, 3])
R = np.ones([3, 3])
a = np.reshape(a, (1000, 3))                  # flatten the grid into a list of points
b = np.array([np.dot(R, xyz) for xyz in a])   # rotate each point one at a time
b = np.reshape(b, (10, 10, 10, 3))            # restore the original grid shape
But this is far too slow when the array a becomes large. Is there a way to do this as a matrix-multiplication-type operation to speed up the computation?
I figured out a solution to my problem that speeds up the computation by more than a factor of 10. It is not a fancy matrix-multiplication solution, though, so anything that makes it even faster is always appreciated.
My current solution is the following (note that it operates on the original 10x10x10x3 array, before any reshaping):
# Split the array into its x, y and z component grids
X, Y, Z = a[:, :, :, 0], a[:, :, :, 1], a[:, :, :, 2]

R00X = R[0][0] * X
R10X = R[1][0] * X
R20X = R[2][0] * X
R01Y = R[0][1] * Y
R11Y = R[1][1] * Y
R21Y = R[2][1] * Y
R02Z = R[0][2] * Z
R12Z = R[1][2] * Z
R22Z = R[2][2] * Z

# Rotated field
Xr = R00X + R01Y + R02Z
Yr = R10X + R11Y + R12Z
Zr = R20X + R21Y + R22Z

# Stack along a new last axis; using .T here would also reverse the three spatial axes
b = np.stack([Xr, Yr, Zr], axis=-1)
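For reference, the whole operation can also be written as a single vectorized product, since rotating every point is just a contraction over the last axis. A short sketch (a and R as defined above; the einsum spelling is one of several equivalent options):
import numpy as np

a = np.ones([10, 10, 10, 3])
R = np.eye(3)   # placeholder rotation matrix

# b[i, j, k, :] = R @ a[i, j, k, :] for every grid point, in one call
b = np.einsum('mn,ijkn->ijkm', R, a)

# Equivalent: multiplying by R.T on the right contracts the same axis
b_alt = a @ R.T
print(np.allclose(b, b_alt))   # True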

Keras - Mean Squared Error (MSE) calculation definition for images?

I am using
loss = 'mse'
in Keras for an autoencoder model that reconstructs greyscale images. My batch size is 1. A single loss value is being produced during training.
I can't seem to find an answer to this anywhere. How does Keras calculate the MSE loss value for these two images (the input and its reconstruction)? They are represented as 2D NumPy arrays. Does it compute the squared difference for each pixel and then divide by the number of pixels (given that the batch size is 1)?
Is the process the same if more than one greyscale image is fed into the model: computing the squared pixel difference across all the images, then dividing by the total number of pixels in all of them?
Many thanks
from keras import backend as K

def mse(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)

This is the Keras code for the MSE: the operations (difference and square) are element-wise (pixel by pixel), and it then computes the mean, i.e. it divides by the number of values (pixels).
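To make the reduction concrete, here is a small NumPy sketch of the same computation for a batch of greyscale images (shapes are illustrative; Keras additionally averages the per-sample values returned by the loss function into the single number it reports):
import numpy as np

batch, h, w = 2, 4, 4                     # two 4x4 greyscale images
y_true = np.random.rand(batch, h, w)
y_pred = np.random.rand(batch, h, w)

# Element-wise squared difference, averaged over the last axis (as in the Keras loss) ...
per_row = np.mean(np.square(y_pred - y_true), axis=-1)   # shape (batch, h)

# ... which Keras then averages into one scalar for reporting
reported = per_row.mean()

# That equals the squared error averaged over every pixel of every image
print(np.isclose(reported, np.mean(np.square(y_pred - y_true))))   # True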

Matlab - Image Formation - Matrix

I am doing a very interesting computer vision project about how to "manually" create images with Matlab.
The teacher gave me three matrices: the illuminant matrix (called E), the camera sensitivity matrix (called R) and finally, the surface reflectance matrix (called S).
The matrix dimensions are as follows:
S: 31x512x512 (reflectance samples x x-dimension x y-dimension)
R: 31x3
E: 31x1
The teacher also gave me the following relationship:
P = transpose(C)*R = transpose(S)*diag(E)*R
where C is the color matrix and P is the sensor response matrix. Dimensionally, the reflectance reshaped and transposed to 262144x31, times the 31x31 diag(E), times the 31x3 R, gives a 262144x3 sensor response that can be reshaped into a 512x512x3 image.
The goal is to display the image formed by all the previous matrices, so we have to compute the P matrix.
The class of all the matrices is double.
This is what I have done:
Diag_E = diag(E);                                   % 31x31 diagonal matrix of the illuminant E
S_reshaped = reshape(S, 31, 512*512);               % Reshape the surface reflectance matrix
S_permute = permute(S_reshaped, [2 1]);             % The output matrix is a 262144x31 matrix
Color_Signal_D65_buffer = S_permute*Diag_E;         % Color signal, 262144x31
Color_Signal_D65 = reshape(Color_Signal_D65_buffer, [512 512 31]); % This is the final color matrix
Image_D65_buffer = reshape(Color_Signal_D65, 512*512, 31)*R;       % Apply the given formula
Image_D65 = reshape(Image_D65_buffer, [512 512 3]);                % Image formation
Image_D65_norm = sqrt(sum(Image_D65.^2, 3));                       % Compute the per-pixel norm of Image_D65
Image_D65_Normalized = bsxfun(@rdivide, Image_D65, Image_D65_norm);% Divide each pixel by its norm to normalize the image
figure
imshow(Image_D65_Normalized)                        % Display the image
However, it did not work at all. The output is an image, but the colors are completely wrong (there is too much blue in the image).
I think it could be a matrix-reshaping problem, but I have tried all the possible combinations with no luck.
Thank you so much for your help
I've finally found the error. It was a problem in the normalization step: I was using the wrong formula.

Calculate Median Image in Matlab

I am new to Matlab, so forgive me if I am asking about the obvious here: what I have is a collection of color photographic images (all the same dimensions). What I want to do is calculate the median color value for each pixel.
I know there is a median filter in Matlab, but as far as I know it does not do exactly what I want, because I want to calculate the median value across the entire collection of images, for each separate pixel.
So, for example, if I have three images, I want Matlab to calculate (for each pixel) which color value out of those three images is the median. Does anyone know how I would go about doing this?
Edit: From what I can come up with, I would have to load all the images into a single matrix. The matrix would have to have 4 dimensions (height, width, rgb, images), and for each pixel and each color, find the median in the 4th dimension (between the images).
Is that correct (and possible)? And how can I do this?
Your intuition is correct. If you have images image_1, image_2, image_3, for example, you can assign them to a four-dimensional matrix:
X(:,:,:,1) = image_1;
X(:,:,:,2) = image_2;
X(:,:,:,3) = image_3;
Then use:
Y = median(X, 4);
to get the median along the fourth dimension (i.e. across the images).
Expanding my comments into a full answer:
@prototoast's answer is elegant, but since the medians for the R, G and B values of each pixel are calculated separately, the output image will look very strange.
To get a well-defined median that makes visual sense, the easiest thing to do is cast the images to black-and-white before you try to take the median.
rgb2gray() from the Image Processing toolbox will do this in a way that preserves the luminance of each pixel while discarding the hue and saturation.
EDIT:
If you want to define the "RGB median" as "the middle value in Cartesian coordinates", this is easy enough to do for three images.
Consider a single pixel with three possible choices for the median colour, C1 = (r1, g1, b1), C2 = (r2, g2, b2), C3 = (r3, g3, b3). Generally these form a triangle in 3D space.
Take the Euclidean distance between each pair of colours: D1_2 = norm(C2 - C1), D2_3 = norm(C3 - C2), D1_3 = norm(C3 - C1).
Pick the "median" to be the colour with the lowest total distance to the other two. Defining D1 = D1_2 + D1_3, etc., and taking min(D1, D2, D3) should work, courtesy of the triangle inequality. Note the degenerate cases: an equilateral triangle (C1, C2, C3 equidistant), a line (C1, C2, C3 collinear), or a point (C1 = C2 = C3).
Note that this simple way of thinking about a 3D median is hard to extend to more than three images, because "the median" of a set of four or more 3D points is a bit harder to define.
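A small sketch of that selection rule for a single pixel, written in NumPy for brevity (the steps translate directly to Matlab; the function name and sample colours are purely illustrative):
import numpy as np

def median_of_three_colours(c1, c2, c3):
    # Each colour is an (r, g, b) vector; return the one with the
    # smallest total Euclidean distance to the other two.
    colours = np.array([c1, c2, c3], dtype=float)
    d12 = np.linalg.norm(colours[0] - colours[1])
    d13 = np.linalg.norm(colours[0] - colours[2])
    d23 = np.linalg.norm(colours[1] - colours[2])
    totals = [d12 + d13, d12 + d23, d13 + d23]
    return colours[int(np.argmin(totals))]

print(median_of_three_colours((255, 0, 0), (250, 10, 5), (0, 0, 255)))
# [250. 10. 5.] -- the colour with the smallest total distance to the other two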
Edit 2
For defining the "median" of N points as the centre of the smallest sphere that encloses them in 3D space, you could try:
Find the two points N1 and N2 in {N} that are furthest apart. The distance between N1 and N2 is the diameter of the smallest sphere that encloses all the points. (Proof: Any smaller and the sphere would not be able to enclose both N1 and N2 at the same time.)
The median is then halfway between N1 and N2: M = (N1+N2)/2.
Edit 3: The above only works if no three points are equidistant. Maybe you need to ask math.stackexchange.com?
Edit 4: Wikipedia delivers again! Smallest circle problem, Bounding sphere.
