I have been trying to multiply a column matrix and a row matrix in Mathematica, but Mathematica treats the row matrix as a column matrix, so the multiplication doesn't work. My code is:
Y = Inverse[S];
Print["Y=", MatrixForm[Y]];
For[i = 1, i <= n, i++,
  Subscript[P, i] = MatrixForm[S[[All, i]].Y[[i]]];
  Print["CarpimS=", MatrixForm[S[[All, i]]]];
  Print["CarpimY=", MatrixForm[Y[[i]]]];
  Print["P=", Subscript[P, i]];
];
If anyone knows what is going on here, please answer.
This is a badly written question, so I'm going to have to make some guesses. Your code does not seem relevant to your question, with this exception: S[[All, i]].Y[[i]].

Given your description, I'm guessing we can say that S is k by k and so is Y. If your goal is to Dot the i-th column of S with the i-th row of its inverse Y, then what you have is fine: you produce each as a 1-d vector and then form a scalar product. But you say you're not getting what you want, so I'm guessing you want the outer product instead.
mS = IdentityMatrix[5];
mS[[3, 3]] = 99;
mY = Inverse[mS];
mS[[All, 3]].mY[[3]] (* scalar product *)
Outer[Times, mS[[All, 3]], mY[[3]]] (* outer product *)
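If the outer product is indeed what was wanted, a quick sanity check (reusing mS and mY from above) is that summing the n outer products reconstructs mS.mY, i.e. the identity matrix:
Sum[Outer[Times, mS[[All, i]], mY[[i]]], {i, 5}] == IdentityMatrix[5]
(* True *)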
If I guessed wrong, you will have to work on improving your question.
Having a bit of trouble generating an NxN matrix in Mathematica. Given the value of N, I need to construct the NxN matrix containing the values 1 through N^2 laid out row by row. Here is what I have so far:
N = Input["Enter value for N:"];
matrix = ConstantArray[0, {N, N}];
Do[matrix[[i, j]] = "???", {i, N}, {j, N}]
matrix // MatrixForm
Not sure what should go as the statement in my Do loop. Any help would be appreciated.
You could create a 1D array [1 ... n^2] and then reshape or partition it into a matrix.
matrix = ArrayReshape[Range[n^2], {n, n}]
(* also works: *)
matrix = Partition[Range[n^2], n]
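For example, with n = 3 either form gives:
ArrayReshape[Range[9], {3, 3}]
(* {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}} *)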
A couple more ways:
matrix = Table[j + (i - 1) n, {i, n}, {j, n}]
matrix = Array[#2 + (#1 - 1) n &, {n, n}]
The Table form should give a clue how to fix your Do as well (see the sketch below), but that is usually a poor approach performance-wise.
Do not use capital N, by the way; it is a reserved (built-in) symbol.
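Putting those pieces together, a sketch of the original Do approach with the blank filled in, N renamed to n, and the Input prompt replaced by a fixed value for reproducibility:
n = 4;  (* lowercase n, since capital N is built in *)
matrix = ConstantArray[0, {n, n}];
Do[matrix[[i, j]] = j + (i - 1) n, {i, n}, {j, n}];
matrix // MatrixForm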
I have an nX2 matrix A and a 3D array K. I would like to take the element-wise product of the two slices along the 3rd dimension of K designated by the indices in each row of A, and sum these products over all rows of A.
For instance, a simplified example with n=2:
A = [1 2; 3 4];                  % 2x2 matrix of indices into the 3rd dim of K
K = unifrnd(0.1, 0.1, 2, 2, 4);  % 3D array (note: these bounds make every entry 0.1, not random)
L = zeros(2, 2);                 % save result to here
for t = 1:2
    L = L + prod(K(:,:,A(t,:)), 3);
end
Can I get rid of the for loop in this case?
How's this?
B = A.';  % transpose so that B(:) walks the rows of A in order
L = squeeze(sum(prod(...
    reshape(permute(K(:,:,B(:)),[3 1 2]),2,[],size(K,1),size(K,2)),...  % 2 x n x size(K,1) x size(K,2)
    1),...   % product over each pair of indexed slices
    2));     % sum over the rows of A
Your test case is too simple, though, so I can't be entirely sure that it's correct.
The idea is that we first take all the indices in A, row by row (which is why we transpose to B and then use B(:)), then reshape the elements of K such that the first two dimensions are of size [2, n] and the last two are the original first two dimensions of K. We then take the product, then the sum along the necessary dimensions, ending up with an array that has to be squeezed to get a 2D matrix.
Using a somewhat more informative test case:
K = rand(2,3,4);
A = randi(4,4,2);
L = zeros(2,3);%save result to here
for t=1:size(A,1)
L = L+prod(K(:,:,A(t,:)),3);
end
B = A.';
L2 = squeeze(sum(prod(reshape(permute(K(:,:,B(:)),[3 1 2]),2,[],size(K,1),size(K,2)),1),2));
Then
>> isequal(L,L2)
ans =
1
With some reshaping magic -
%// Get sizes
[m1,n1,r1] = size(K);
[m2,n2] = size(A);
%// Index into 3rd dim of K; perform reductions and reshape back
Lout = reshape(sum(prod(reshape(K(:,:,A'),[],n2,m2),2),3),m1,n1);
Explanation:
1. Index into the third dimension of K with a transposed version of A (transposed because we are using the rows of A for indexing).
2. Perform the prod() and sum() reductions.
3. Finally, reshape back to the same shape as K but without the third dimension, as that was removed in the earlier reduction steps.
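As a quick check, assuming Lout from the snippet above and the loop result L from the earlier rand(2,3,4)/randi(4,4,2) test case:
isequal(L, Lout)
% ans = 1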
For some reason I can't get my head around this loop in Octave:
for i=1:n
y(2:(i+1))=y(2:(i+1))-x(i)*y(1:i)
end;
If I break it down in steps (suppose n=3), wouldn't the loop look like this:
i=1
y(2)=y(2)-x(1)*y(1)
i=2
y(2)=y(2)-x(2)*y(1)
y(3)=y(3)-x(2)*y(2)
i=3
y(2)=y(2)-x(3)*y(1)
y(3)=y(3)-x(3)*y(2)
y(4)=y(4)-x(3)*y(3)
Well, I must be wrong, because the results are not right when I do the loop step by step, but for the life of me I can't figure out where. Can someone please help me?
First of all, forgive my styling; I have never used matrix/vector representations on Stack Overflow before. Anyway, I hope this gives you an idea of how it works internally:
x = [1,2,3]
y = [1,0,0,0]
Step 1:
The first iteration (i=1) executes:
y(2) = y(2) - x(1)*y(1)
These are just scalar values: y(2) = 0, x(1) = 1, y(1) = 1.
So y(2) = 0 - 1*1 = -1, which means that the 2nd position in vector y becomes -1,
resulting in y = [1,-1,0,0].
Step 2:
The next iteration (i=2) executes:
y(2:3) = y(2:3) - x(2)*y(1:2)
Here y(2:3) and y(1:2) are vectors of size 2, whose values are the ones at the corresponding positions in y. Importantly, the whole right-hand side is evaluated with the old values of y before anything is assigned. Calculating it gives the new vector [-3,2], which is assigned to the 2nd and 3rd positions of y, resulting in the vector [1,-3,2,0].
Step 3:
Repeat step 2, but this time with vectors of size 3: y(2:4) = y(2:4) - x(3)*y(1:3). Assigning the outcome to positions 2, 3 and 4 of y gives the final vector y = [1,-6,11,-6]. This simultaneous evaluation is exactly where the scalar, step-by-step expansion in the question goes wrong: it reuses already-updated entries of y on the right-hand side.
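A minimal Octave sketch to watch the whole loop run with the x and y above; note that each right-hand side is evaluated in full before y is modified:
x = [1, 2, 3];
y = [1, 0, 0, 0];
for i = 1:3
  % the entire RHS uses the pre-update values of y
  y(2:(i+1)) = y(2:(i+1)) - x(i)*y(1:i);
end
disp(y)   % prints 1 -6 11 -6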
I have an n x m matrix of data.
How do I create a function that forms a sum involving the elements of each column, such that if I input a value, I get back a 1 x m row (where m > 100)?
More specifically, I am computing a discrete Fourier transform of the data in each column that should work for any input frequency I put in.
Here is my code for a single column:
(* Length of time data *)
n = Length[t]
(* Compute discrete fourier transform at specified frequency f *)
DFT[f_] := (t[[2]] - t[[1]]) Sum[
mat[[i + 1]] * Exp[2 Pi I f mat[[i + 1]]], {i, 0, n - 1}];
I'd like to extend this to m columns so that if I want to compute the DFT for a given column at a specific frequency, I can just extract an element of a 1 x m row.
I've considered a function like Map, but it seems like it'll directly apply my function by inputting the value of each element in the row, which isn't exactly what I want.
I am guessing you mean that you just want to map a function over a column?
mat = RandomInteger[{0, 10}, {5, 6}];
map[f_, mat_?(MatrixQ[#] &), c_Integer /; c > 0] := f /@ mat[[All, c]]
map[f, mat, 2]
It seems like you just need to get the column. The way that matrices are stored in Mathematica has the first coordinate as the row and the second as the column. All coordinates start at 1, not 0. To get an element at a specific coordinate, you use matrix[[row, column]]. If you want a whole row, matrix[[row]]. If you want a column, matrix[[All, column]]. Accordingly, here is one way you might adjust the DFT function:
DFT[f_, list_] := (t[[2]] - t[[1]]) Sum[
list[[i]] * Exp[2 Pi I f list[[i]]], {i, 1, n}];
yourColumnDFT = DFT[f, matrix[[All, columnNumber]]]
In fact, you can make this even simpler by removing the call to Sum, because arithmetic operations like these are Listable and automatically thread over lists element by element:
DFT[f_, list_] := (t[[2]] - t[[1]]) Total[list Exp[2 Pi I f list]]
By the way, there is a built-in function for this, Fourier (see its documentation), which uses a slightly different DFT convention than yours but is also useful. I recommend looking for built-in functions for these tasks in the future: Mathematica has a wide range of functionality like this, and it will save you a lot of trouble.
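To get the full 1 x m row the question asks for, one possible helper (the name allColumnDFTs is my own, assuming the two-argument DFT above and a data matrix mat) maps the function over the columns:
allColumnDFTs[f_, mat_] := DFT[f, #] & /@ Transpose[mat]
Then allColumnDFTs[f, mat] returns a length-m list whose j-th element is the DFT of column j at frequency f.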
I'm looking at the standard definition of the assignment problem as defined here
My question is to do with the two constraints (latex notation follows):
\sum_{j=1}^n x_{ij} = 1 for all i = 1, ..., n
\sum_{i=1}^n x_{ij} = 1 for all j = 1, ..., n
Specifically, why is the second constraint required? Doesn't the first already cover all of the variables x_{ij}?
Consider the matrix x_ij with the i ranging over the rows, and j ranging over the columns.
The first equation says that for each i (that is, for each row!) the sum of the values in that row equals 1.
The second equation says that for each j (that is, for each column!) the sum of the values in that column equals 1.
No. Given that all the entries in X are 0 or 1, one constraint says 'there is exactly one 1 in each column' - the other says 'there is exactly one 1 in each row' (I always forget which way round matrix subscripts conventionally go). These statements have independent truth values.
This is not even remotely a programming problem. But I'll answer it anyway.
The first is a sum over j, for EACH value of i. The second is a sum over i, for EACH value of j.
So essentially, one of these constraint sets requires that the sum across each row of the matrix x_{i,j} must be unity. The other requires that the sum down each column of that matrix must be unity.
(edit) It seems that we are still not being clear here. Consider the matrix
[0 1]
[0 1]
One must agree that the sum across each row of this matrix is 1. However, the sum of the elements in the first column is zero, while for the sum of the elements in the second column we find 2.
Now, consider a different matrix.
[0 1]
[1 0]
See that here, the sum over the rows or down the columns is always 1.
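To make that concrete, a small Mathematica sketch of the row and column sums of the two example matrices:
mBad = {{0, 1}, {0, 1}};
mGood = {{0, 1}, {1, 0}};
Total[mBad, {2}]   (* row sums {1, 1}: first constraint holds *)
Total[mBad, {1}]   (* column sums {0, 2}: second constraint fails *)
Total[mGood, {2}]  (* row sums {1, 1} *)
Total[mGood, {1}]  (* column sums {1, 1}: both constraint sets hold *)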