sub2ind for all x and y coordinates of a matrix

I am quite a newbie in MATLAB and I have a simple issue that is bothering me:
I want to know if it's possible to convert all the subscripts of a matrix to linear indices.
When using SUB2IND I must supply the x and y coordinates, but I want to convert all of them at the same time.
I can use the function FIND, which returns two vectors x and y, and this way I can use SUB2IND successfully, but FIND only returns the coordinates of nonzero elements.
Is there a smart way to do this?

If you want the linear indices of all the elements of an array A, this can be done simply via:
IND = 1:numel(A);
This works for an array of any size or dimension.
See the MATLAB documentation on array indexing for the difference between linear indexing and logical indexing; when you use find, you're essentially using logical indexing to obtain linear indices. The find function can also produce all of your linear indices via IND = find(A==A);, but this is horrendously inefficient (and it fails if A contains NaN, since NaN == NaN is false).

You don't need to convert anything; just use a single number or a 1-D vector when accessing elements of your matrix. For example, given a 5x5 matrix M:
M = magic(5);
you can access the last element using M(5,5) or using M(25).
Similarly, M(21:25) will give you the elements M(1,5), M(2,5), ..., M(5,5).


Make a large matrix smaller with a mask

I have a tensor A (M x N x C) and a mask (M x N) for tensor A.
Due to memory issues in my transformer network, I want to make a smaller tensor by taking only the part of tensor A defined by the mask.
For example, Figure 1 is my tensor A; masked query-key pairs are painted gray.
Figure 1. Example of tensor A
I don't need the gray-colored values for further calculation, so I want to make a smaller tensor containing all the required values.
From the tensor in Figure 1, I hope to make a tensor like Figure 2. In Figure 2, the gray-colored values are just dummy values, and whether the value at a given index is a dummy can be determined from the mask (Figure 3).
Figure 2. Smaller tensor
Figure 3. Mask indicating the indices filled with dummy values
How can I do this with efficient torch operations?
I think you are looking for sparse tensors. A sparse representation does not give you a "packed" matrix with "dummy" values, but rather a different way of storing only those entries you care about.
PyTorch also supports some operations on sparse matrices.
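A minimal sketch of the sparse route, using small illustrative shapes (M=3, N=4, C=2) and a made-up boolean mask:

```python
import torch

# Hypothetical shapes and data, for illustration only.
M, N, C = 3, 4, 2
A = torch.arange(M * N * C, dtype=torch.float32).reshape(M, N, C)
mask = torch.tensor([[1, 0, 1, 0],
                     [0, 1, 1, 1],
                     [1, 0, 0, 0]], dtype=torch.bool)

# Keep only the masked entries, stored as (indices, values) pairs.
idx = mask.nonzero(as_tuple=True)      # row and column indices of kept entries
values = A[idx]                        # shape (num_kept, C)
A_sparse = torch.sparse_coo_tensor(
    torch.stack(idx), values, size=(M, N, C))

# Densifying round-trips the masked values; unmasked positions become zero.
assert torch.equal(A_sparse.to_dense()[idx], values)
```

Note that the last dimension (C) is kept dense here, which matches the layout in the question: the sparsity pattern lives on the M x N query-key grid only.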

Julia - How do I add a matrix to a list of matrices

I'm new to Julia, and I am currently working on a model where I need to add a matrix to a list of matrices. I am trying to accomplish this with:
push!(BranchDomainNew, BranchDomain[k])
where BranchDomainNew is a 1x7 Matrix whose elements are themselves matrices. I am trying to append BranchDomain[k] (another matrix of the same dimensions) to this list. Ultimately, my goal is to have BranchDomainNew be 8 matrices long, with the last index containing BranchDomain[k].
Here's the error I keep getting:
MethodError: no method matching push!(::Matrix{Any}, ::Matrix{Bool})
I also tried append!(), which unfortunately did not work either; I got the same error (except with append! instead of push!). I'd love to know why these methods don't work here, and how I can accomplish my goal. I am working with version v"1.7.2". Thanks.
You cannot push! or append! elements to a matrix, because matrices are 2-dimensional entities, and adding single elements could ruin their shape; it is therefore not allowed. You can instead concatenate rows or columns using hcat or vcat.
But it looks like what you really want is a Vector, not a 1xN Matrix.
So make sure BranchDomainNew is a Vector of matrices instead of a Matrix of matrices. Then you can push! and append! all you like.
You did not show how you made your matrix, but it is possible that you did something like this:
BranchDomainNew = [mat1 mat2 mat3] # create 1x3 Matrix
when you should have done
BranchDomainNew = [mat1, mat2, mat3] # create length 3 Vector
It is a common mistake for new Julia users to create 1xN or Nx1 matrices when they should use a length-N vector. For example, they often initialize arrays as zeros(N, 1) when they should use zeros(N).
The difference is important, and in almost all cases a vector is better.

TensorFlow: reduce_max function

Consider the following code:
a = tf.convert_to_tensor(np.array([[1001, 1002], [3, 4]]), dtype=tf.float32)
b = tf.reduce_max(a, reduction_indices=[1], keep_dims=True)
with tf.Session():
    print(b.eval())
What exactly is the purpose of keep_dims here? I tested quite a bit, and saw that the above is equivalent to:
b = tf.reduce_max(a, reduction_indices=[1], keep_dims=False)
b = tf.expand_dims(b, 1)
I may be wrong, but my guess is that if keep_dims is False, we get a column vector, and if keep_dims=True, we have a 2x1 matrix. But how are they different?
If you reduce over one or more indices (i.e. dimensions of the tensor), you effectively reduce the rank of the tensor (i.e. its number of dimensions or, in other words, the number of indices you need in order to access an element). By setting keep_dims=True, you are telling TensorFlow to keep the dimensions over which you reduce; they will then have size 1, but they are still there.
While a column vector and an nx1 matrix are conceptually the same thing, in TensorFlow these are tensors of rank 1 (you need a single index to access an element) and rank 2 (you need two indices to access an element), respectively.
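The rank difference is easy to see with NumPy's equivalent keepdims argument (modern TensorFlow spells it keepdims as well), using the same data as in the question:

```python
import numpy as np

a = np.array([[1001, 1002], [3, 4]], dtype=np.float32)

reduced = np.max(a, axis=1)              # rank 1: shape (2,)
kept = np.max(a, axis=1, keepdims=True)  # rank 2: shape (2, 1)

# Same values, different rank: one index vs. two indices per element.
assert reduced.shape == (2,)
assert kept.shape == (2, 1)
assert reduced[1] == kept[1, 0] == 4.0
```

The kept-dimensions form is convenient when the result must broadcast back against the original tensor, e.g. a / kept divides each row by its maximum.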

Matrix transposition without using loops?

How do you transpose a matrix without using any kind of loop? If it's n x n, we can use the diagonal as a base and swap elements across it, but for an n x m matrix I think this approach is not feasible.
In any case, don't we need loops just to read or store the matrix?
Is there any solution without loops?
If you know the dimensions of the matrix in advance, you don't need any loop: you can swap each pair of positions with explicit statements to transpose the whole matrix, even when the dimension is m x n.
But if you don't know the dimensions in advance, you will definitely need a loop to iterate over the matrix, reading each position and swapping it with the corresponding one.
For storing the entire transposed matrix, you definitely need a loop. This is not really a big deal, since storing a matrix involves loops anyway: you need to loop through its members to store them.
If you are just reading it, you can use the definition of the matrix transpose and simply translate the indices. For example, in C:
int getTransposedElement(int i, int j, int** originalMatrix) {
    return originalMatrix[j][i]; /* swap the indices */
}
If you are using a language with classes and polymorphism, you can create a new matrix class that does this automatically. This has the additional benefit that it avoids copying the original matrix, which saves memory and allows changes to the transposed matrix to be reflected in the original matrix.
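A minimal sketch of that idea in Python (the class and method names are my own, purely illustrative):

```python
class TransposedView:
    """Read-only transposed view of a matrix: translates (i, j) -> (j, i)
    on each access instead of copying the data."""

    def __init__(self, matrix):
        self.matrix = matrix

    def __getitem__(self, ij):
        i, j = ij
        return self.matrix[j][i]

m = [[1, 2, 3],
     [4, 5, 6]]            # a 2x3 matrix
t = TransposedView(m)      # behaves like the 3x2 transpose; no copy is made

assert t[0, 1] == m[1][0] == 4
assert t[2, 0] == m[0][2] == 3
```

Because no data is copied, any later change to m is immediately visible through t, which is exactly the sharing behavior described above.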

Eigen3 - accessing a (non-contiguous) subset of vector elements

Suppose I have a VectorXf exampleVector with arbitrary float values, and I want to select some elements according to their values.
I can efficiently get a logical array of true/false values according to my criterion, e.g.:
boolArray = exampleVector.array() < 1;
But now I want to make a new vector (of smaller dimension) that contains only the elements that meet my criterion.
How can I do this efficiently in Eigen3?
In R I could use reducedVector = exampleVector[boolArray].
Thanks in advance.
Since the VectorXf stores its values in a contiguous memory range, you will have to copy out the values you want. I am fairly sure R does it the same way, so you won't lose efficiency. There is, however, no way that I know of to do it as conveniently as in R, so you will have to loop through and copy out the relevant values.
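Purely to illustrate the copy-out semantics that the R one-liner performs (this sketch is NumPy, not Eigen; in Eigen3 you would write the equivalent loop by hand):

```python
import numpy as np

# Made-up example values for illustration.
example_vector = np.array([0.5, 2.0, 0.3, 4.0], dtype=np.float32)

bool_array = example_vector < 1              # elementwise criterion, like in R
reduced_vector = example_vector[bool_array]  # copies the matching entries out

assert reduced_vector.shape == (2,)          # smaller than the original
assert np.allclose(reduced_vector, [0.5, 0.3])
```

The key point carries over: the result is a fresh, smaller, contiguous array, not a view into the original storage.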
