I have two images A and B, each of size m-by-m. I want to multiply these images such that C = A*B (true matrix multiplication).
So far I've found the immultiply function in MATLAB, but this function multiplies the corresponding elements of the images rather than performing matrix multiplication.
I have also tried A.*B, but this also gives element-wise multiplication. When I try A*B I get this message:
??? Error using ==> mtimes
Integer data types are not fully supported for this operation.
At least one operand must be a scalar.
You need to convert the images to double before multiplying them.
Example:
multiplied = double(firstMat) * double(secondMat);
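The same pitfall exists outside MATLAB. As a sketch (NumPy here, not the asker's MATLAB code), 8-bit integer arithmetic wraps around, so the matrix product should be computed after casting to floating point:

```python
import numpy as np

# Two small "images" stored as 8-bit integers, as an image reader would return them.
A = np.array([[200, 100], [50, 25]], dtype=np.uint8)
B = np.array([[2, 0], [0, 2]], dtype=np.uint8)

# Element-wise product (the analogue of immultiply / A.*B):
# integer arithmetic silently wraps modulo 256, so 200*2 becomes 144.
elementwise = A * B

# True matrix product: cast to float first, mirroring the MATLAB fix.
C = A.astype(np.float64) @ B.astype(np.float64)
```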
I have a tensor A (M x N x C) and a mask (M x N) for tensor A.
Because of memory issues in my transformer network, I want to build a smaller tensor by taking only the part of tensor A selected by the mask.
For example, Figure 1 shows my tensor A; masked query-key pairs are painted gray.
Figure 1. example for tensor A
I don't need the gray-colored values for further calculation, so I want to make a smaller tensor containing all the required values.
From the tensor in Figure 1, I hope to build a tensor like the one in Figure 2. In Figure 2, the gray-colored values are just dummy values, and whether the value at a given index is a dummy can be determined from the mask (Figure 3).
Figure 2. smaller tensor
Figure 3. Mask indicating index of dummy value filled
How can I do this with efficient torch operations?
I think you are looking for sparse tensors. A sparse representation does not give you a "packed" matrix with "dummy" values, but rather a different way of storing only those entries you care about.
PyTorch also supports some operations on sparse matrices.
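A minimal NumPy sketch of the idea (the data is made up for illustration; PyTorch's sparse COO tensors use the same indices-plus-values layout):

```python
import numpy as np

# A dense 2-D slice standing in for tensor A, and its boolean mask.
A = np.array([[1., 2., 0.],
              [0., 3., 4.]])
mask = np.array([[True, True, False],
                 [False, True, True]])

# COO-style sparse representation: keep only the masked entries
# together with their coordinates, instead of a padded matrix with dummies.
rows, cols = np.nonzero(mask)
values = A[mask]   # 1-D array of just the entries we care about
```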
I am trying to find conditions under which a certain matrix is invertible (which is problematic, as the matrix is random). The matrix results from the following:
$A=\tilde{A}+\operatorname{diag}(n)$.
Furthermore, $\tilde{A}$ results from the pointwise multiplication of a random symmetric matrix (consisting of 0 and 1, but necessarily 0 on the diagonal) with a random vector consisting of $\alpha$ and $\beta$ entries.
Does anyone have any ideas how to deduce some criteria for the invertibility of matrix $A$?
Thank you so much!
I have already tried thinking about the LU decomposition, but could not deduce any criterion. Obviously, it fully depends on how the random matrices look, and linear dependence between the rows is less likely when the dimension is higher...
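Lacking a closed-form criterion, one can at least probe the question numerically. The sketch below is only an experiment under assumed distributions (the values of alpha and beta, the normal diagonal n, and the Bernoulli(1/2) entries of the symmetric matrix are all my choices, not given in the question); it samples such matrices and counts how often they come out singular:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.0, -1.0        # assumed values for the two entry types
dim, trials = 10, 200          # assumed dimension and sample size

singular = 0
for _ in range(trials):
    # Random symmetric 0/1 matrix with zero diagonal.
    U = np.triu(rng.integers(0, 2, (dim, dim)), k=1)
    S = U + U.T
    # Pointwise scaling by a random vector of alpha/beta entries (broadcast column-wise).
    v = rng.choice([alpha, beta], size=dim)
    A_tilde = S * v
    # Add diag(n) with a random diagonal vector n.
    A = A_tilde + np.diag(rng.normal(size=dim))
    # Rank deficiency signals non-invertibility.
    if np.linalg.matrix_rank(A) < dim:
        singular += 1
```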
I have two sparse matrices "Matrix1" and "Matrix2" of the same size p x n.
By sparse matrix I mean that it contains a lot of exactly zero elements.
I want to show the two matrices with the same colormap and a single colorbar. Doing this in MATLAB is straightforward:
bottom = min(min(min(Matrix1)),min(min(Matrix2)));
top = max(max(max(Matrix1)),max(max(Matrix2)));
subplot(1,2,1)
imagesc(Matrix1)
colormap(gray)
caxis manual
caxis([bottom top]);
subplot(1,2,2)
imagesc(Matrix2)
colormap(gray)
caxis manual
caxis([bottom top]);
colorbar;
My problem:
When I show a matrix using imagesc(Matrix), the weak components (noise, background) are ignored, whereas they always appear when using imagesc(10*log10(Matrix)).
That is why I want to show the 10*log10 of the matrices. But in this case the minimum value will be -Inf, since the matrices are sparse, and caxis will then give an error because bottom is equal to -Inf.
What do you suggest? How can I modify the above code?
Any help would be very much appreciated!
A very important point is that the minimum value in your matrix will always be 0. Leveraging this, a very simple way to address your problem is to add 1 inside the log operation, so that values that are 0 in the original matrix also map to 0 after the log. This avoids the -Inf error that you're encountering. In fact, this is a very common way of visualizing the Fourier transform: adding 1 before the logarithm ensures that the output has no negative values, while the shape of the curve remains intact, as the effect is simply a translation of the curve by 1 unit to the left.
Therefore, simply do imagesc(10*log10(1 + Matrix));. The minimum is then always bounded at 0, while the maximum is unbounded but set by the largest value seen in Matrix.
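A quick sketch of why this works, in NumPy with made-up data (not your matrices): exact zeros map to exactly 0, every value stays finite, and only genuinely positive entries spread out on the log scale:

```python
import numpy as np

# A sparse-ish matrix with exact zeros and a large dynamic range.
M = np.array([[0.0, 0.0, 1e-2],
              [0.0, 1.0, 1e4]])

# log10(1 + 0) = 0, so zeros no longer blow up to -inf.
scaled = 10 * np.log10(1 + M)

bottom, top = scaled.min(), scaled.max()   # safe limits for the color axis
```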
I am trying to multiply two matrices in Lua whose dimensions are a = 40,000x1 and b = 1x40,000. In Lua, the 40,000x1 matrix shows up as a 1D tensor and the 1x40,000 matrix shows up as a 2D tensor. Whenever I try to multiply them simply with a*b, I get the error: multiplication between 1D and 2D tensors not yet supported. I cannot iterate over each index, because this function is called regularly in my program and that would considerably increase the execution time. How can I multiply a and b?
Use view:
c = a:view(40000, 1) * b
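The same reshape-then-multiply pattern in NumPy, with small sizes standing in for 40,000: viewing the 1-D a as a column and multiplying by the 1 x n b yields the full outer product:

```python
import numpy as np

a = np.arange(3.0)                  # 1-D tensor, shape (3,)
b = np.arange(4.0).reshape(1, 4)    # 2-D tensor, shape (1, 4)

# Reshape a into a column vector, then matrix-multiply: (3, 1) @ (1, 4) -> (3, 4).
c = a.reshape(3, 1) @ b
```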
I have a 20x20 matrix filled with random numbers. I need to find the vector that, multiplied with the random matrix, returns a 20x1 vector of all ones.
What I've tried:
inv(A) (where A is the 20x20 matrix filled with random numbers). I know the inverse alone is not what I want, because multiplying it with A would only return the identity matrix, which is not what I need.
I suggest you use matrix algebra to express the problem and derive the solution. Consider the following, where * means matrix multiplication, 1 means the vector of all ones, and Ainv is the inverse of A.
A*x=1
Ainv * A * x = Ainv * 1
x = Ainv * 1
[EDIT 7 MAR 2016]
In many numerical computing environments (MATLAB, SciPy, etc.), there is a function called solve (or similar) which can be used to solve linear systems expressed as Ax=b. In particular, for MATLAB, see linsolve; also see MATLAB's backslash operator.
I'm a python user, so I use numpy.linalg.solve(), which does the same thing (see this link).
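As a concrete sketch of the derivation above (a random A in the spirit of the question, with an assumed seed):

```python
import numpy as np

rng = np.random.default_rng(42)
A = rng.random((20, 20))     # 20x20 matrix of random numbers
ones = np.ones(20)           # the all-ones right-hand side

# Solve A @ x = 1 directly; this is preferred over forming inv(A) explicitly,
# since it is faster and numerically more stable.
x = np.linalg.solve(A, ones)
```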