I currently have a set of 2D Cartesian coordinates, e.g. {(1,3), (2,2), (3,4)}, which will be put into a 2D array. To perform SVD properly, should the matrix be assembled so that the coordinates form the columns or the rows? e.g.
1 3
2 2
3 4
or
1 2 3
3 2 4
I have been doing a little trial and error, comparing against examples of SVD I have found online; the resulting matrix usually seems to be negated, with some of the values shuffled around.
To clarify further: if I had a matrix E which was MxN, as shown here: http://upload.wikimedia.org/wikipedia/commons/b/bb/Matrix.svg
would defining it as a 2D array be Array[M][N] or Array[N][M]?
I am assuming this actually matters because matrix multiplication is not commutative. Can anyone verify this?
This link describes how to create a matrix from a set of vectors:

In order to create a matrix by compounding vector-like structures we need to do two things to the 'inner vector':
We need to take the transpose so that it is a row rather than a column.
We need a multiplication operation which will make it a field.
However, this does not clarify the convention used by OpenCV and its SVD.
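For what it's worth, a quick numerical experiment shows how the two layouts relate. NumPy is used here as a neutral stand-in; I am assuming OpenCV's SVD behaves the same way, since this is a property of the decomposition itself, not of any one library:

```python
import numpy as np

pts_rows = np.array([[1., 3.],
                     [2., 2.],
                     [3., 4.]])   # one coordinate pair per row (3x2)
pts_cols = pts_rows.T             # one coordinate pair per column (2x3)

U1, s1, Vt1 = np.linalg.svd(pts_rows, full_matrices=False)
U2, s2, Vt2 = np.linalg.svd(pts_cols, full_matrices=False)

# The singular values are identical either way; transposing the input just
# swaps the roles of U and V (up to sign), since A^T = V S U^T.
print(np.allclose(s1, s2))                      # True
print(np.allclose(np.abs(U2), np.abs(Vt1.T)))   # True
```

So neither layout is "wrong"; the factors simply trade places, which also explains sign-flipped or shuffled results compared to worked examples found online.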
I'm new to Julia, and I am currently working on a model where I need to add a matrix to a list of matrices. I am trying to accomplish this with:
push!(BranchDomainNew, BranchDomain[k])
Where BranchDomainNew is a 1x7 matrix (3D) made up of matrices. I am trying to append BranchDomain[k] (another matrix of the same dimensions) to this list. Ultimately, my goal is to have BranchDomainNew be 8 matrices long, with the last index containing BranchDomain[k].
Here's the error I keep getting:
MethodError: no method matching push!(::Matrix{Any}, ::Matrix{Bool})
I also tried append!(), which unfortunately did not work either; I got the same error (except with append! instead of push!). I'd love to know why these methods don't work here, and how I can accomplish this goal. I am working with version v"1.7.2". Thanks
You cannot push! or append! elements to a matrix, because matrices are 2-dimensional entities, and adding single elements could ruin their shape, so it is not allowed. You can instead concatenate rows or columns using hcat or vcat.
But it looks like what you really should use is a Vector, not a 1xN Matrix.
So make sure that BranchDomainNew is a Vector of matrices, instead of a Matrix of matrices. Then you can push! and append! all you like.
You did not show how you made your matrix, but it is possible that you did something like this:
BranchDomainNew = [mat1 mat2 mat3] # create 1x3 Matrix
when you should have done
BranchDomainNew = [mat1, mat2, mat3] # create length 3 Vector
It is a common mistake for new Julia users to use 1xN or Nx1 matrices when they should actually use a length-N vector. For example, they often initialize arrays as zeros(N, 1) when they should use zeros(N).
The difference is important, and in almost all cases a vector is better.
Assume that multiplying a matrix G1 of dimension p×q with another matrix G2 of dimension q×r requires pqr scalar multiplications. Computing the product of n matrices G1G2G3…Gn can be done by parenthesizing it in different ways. Define GiGi+1 as an explicitly computed pair for a given parenthesization if they are directly multiplied. For example, in the matrix multiplication chain G1G2G3G4G5G6 using parenthesization (G1(G2G3))(G4(G5G6)), G2G3 and G5G6 are the only explicitly computed pairs.
Consider a matrix multiplication chain F1F2F3F4F5, where matrices F1, F2, F3, F4 and F5 are of dimensions 2×25, 25×3, 3×16, 16×1 and 1×1000, respectively. In the parenthesization of F1F2F3F4F5 that minimizes the total number of scalar multiplications, the explicitly computed pair(s) is/are
F1F2 and F3F4 only
F2F3 only
F3F4 only
F2F3 and F4F5 only
=======================================================================
My approach: I want to solve this in under one minute, but the only way I know is the bottom-up dynamic programming approach of filling in a table. The other thing I can conclude is that we should multiply by F5 last, because it has 1000 in its dimensions. So please, how do I develop fast intuition for this kind of question?
======================================================================
The correct answer is F3F4 only.
The most important thing to note is the dimension 1×1000. You had better watch out for it if you want to minimize the multiplications. So we know what we are looking for: basically, to multiply a small number with 1000.
Examining carefully: if we went with F4F5, we would be doing 16×1×1000 multiplications. But computing F3F4 first, the resulting matrix has dimension 3×1. So by going with F3F4 we get to keep small numbers like 3 and 1 around. So there is no way I'm going with F4F5.
By similar logic I would not go with F2F3 and lose the smaller 3, leaving the bigger 25 and 16 to be used later with 1000.
As for F1F2, you can quickly check that (F1F2)(F3F4) is no better than (F1(F2(F3F4))). So the answer is F3F4.
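If you ever want to check such an answer mechanically, the standard bottom-up DP is short enough to sketch. Here is a Python sketch (helper names are my own); for the chain above it confirms both the minimum cost and that F3F4 is the only explicitly computed pair:

```python
def chain_order(dims):
    # dims[i-1] x dims[i] is the shape of matrix F_i, so for F1..F5 above
    # dims = [2, 25, 3, 16, 1, 1000]
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = float("inf")
            for k in range(i, j):
                c = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if c < cost[i][j]:
                    cost[i][j], split[i][j] = c, k
    return cost, split

def explicit_pairs(split, i, j, pairs=None):
    # A pair Fi F(i+1) is "explicitly computed" when a length-2 subchain
    # is multiplied directly.
    pairs = set() if pairs is None else pairs
    if j - i == 1:
        pairs.add((i, j))
    elif j > i:
        k = split[i][j]
        explicit_pairs(split, i, k, pairs)
        explicit_pairs(split, k + 1, j, pairs)
    return pairs

cost, split = chain_order([2, 25, 3, 16, 1, 1000])
print(cost[1][5])                   # 2173 scalar multiplications
print(explicit_pairs(split, 1, 5))  # {(3, 4)}, i.e. F3F4 only
```

The optimum here is (F1(F2(F3F4)))F5, costing 48 + 75 + 50 + 2000 = 2173 multiplications.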
I am implementing the Jacobi algorithm to get the eigenvectors of a symmetric matrix. I don't understand why I get different eigenvectors from my application (the same result as mine here: http://fptchlx02.tu-graz.ac.at/cgi-bin/access.com?c1=0000&c2=0000&c3=0000&file=0638) than from Wolfram Alpha: http://www.wolframalpha.com/input/?i=eigenvector%7B%7B1%2C2%2C3%7D%2C%7B2%2C2%2C1%7D%2C%7B3%2C1%2C1%7D%7D
Example matrix:
1 2 3
2 2 1
3 1 1
My Result:
0.7400944496522529, 0.6305371413491765, 0.23384421945632447
-0.20230251371232585, 0.5403584533063043, -0.8167535949636785
-0.6413531776951003, 0.5571668060588798, 0.5274763043839444
Result from WA:
1.13168, 0.969831, 1
-1.15396, 0.315431, 1
0.443327, -1.54842, 1
I expect that the solution is trivial, but I can't find it. I asked this question on MathOverflow and they pointed me to this site.
Eigenvectors of a matrix are not unique, and there are multiple possible decompositions; in fact, only eigenspaces can be defined uniquely. Both results that you are receiving are valid. You can easily see that by asking Wolfram Alpha to orthogonalize the second matrix. Run the following query:
Orthogonalize[{{1.13168, 0.969831, 1.}, {-1.15396, 0.315431, 1.}, {0.443327, -1.54842, 1.}}]
to obtain
0.630537 0.540358 0.557168
-0.740094 0.202306 0.641353
0.233844 -0.816754 0.527475
Now you can see that your algorithm returns a correct result. First, the matrix is transposed: WA gave you row vectors, and your algorithm returns them as columns. Then, the first vector is multiplied by -1, but any eigenvector can be multiplied by a non-zero constant and remain a valid eigenvector. Otherwise, the results match perfectly.
You may also find the following Mathematics StackExchange answer helpful: Are the eigenvectors of a real symmetric matrix always an orthonormal basis without change?
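All of this is easy to verify numerically. A small NumPy sketch (np.linalg.eigh is NumPy's symmetric-matrix eigensolver and returns unit-length eigenvectors as columns):

```python
import numpy as np

# The symmetric matrix from the question
A = np.array([[1., 2., 3.],
              [2., 2., 1.],
              [3., 1., 1.]])

# eigh returns unit-length eigenvectors as the COLUMNS of V
eigenvalues, V = np.linalg.eigh(A)

# The columns form an orthonormal basis: V^T V = I
print(np.allclose(V.T @ V, np.eye(3)))   # True

# Wolfram Alpha's (unnormalized, row-wise) eigenvectors
wa = np.array([[1.13168, 0.969831, 1.0],
               [-1.15396, 0.315431, 1.0],
               [0.443327, -1.54842, 1.0]])
wa_unit = wa / np.linalg.norm(wa, axis=1, keepdims=True)

# After normalization, each WA vector matches some column of V up to sign
matches = [any(np.allclose(abs(row @ V[:, j]), 1.0, atol=1e-4)
               for j in range(3)) for row in wa_unit]
print(matches)   # [True, True, True]
```

So the two answers describe the same eigenspaces, differing only in normalization, ordering, and sign.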
I would like to ask about the Savitzky–Golay filter on 2D images.
What are the best coefficients and order to choose for finding local details in the image?
Moreover, if someone has an explanation of the coefficients and the orders for 2D images, that would be perfect.
Thanks in advance
Please check out this website:
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter#Two-dimensional_convolution_coefficients
UPDATE (thank you for the suggestion, @Rasclatt): the relevant section has been reproduced here:
Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels. The trick is to transform part of the table into a row by a simple ordering of the indices of the pixels. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable z to a set of m data points, the two-dimensional coefficients are found by fitting a polynomial in subsidiary variables v and w to a set of m × m data points. The following example, for a bicubic polynomial and m = 5, illustrates the process, which parallels the process for the one-dimensional case.
The square of 25 data values, d1 − d25, becomes a vector when the rows are placed one after another.
The Jacobian has 10 columns, one for each of the parameters a00 − a03, and 25 rows, one for each pair of v and w values. Each row consists of the bicubic terms 1, v, w, v^2, vw, w^2, v^3, v^2w, vw^2, w^3 evaluated at that pixel's (v, w) offset.
The convolution coefficients are calculated as C = (J^T J)^-1 J^T.
The first row of C contains 25 convolution coefficients, which can be multiplied with the 25 data values to provide a smoothed value for the central data point (13) of the 25.
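The quoted recipe can be reproduced in a few lines. Below is a sketch in NumPy for the m = 5, bicubic case from the quote; the exact ordering of the Jacobian's columns is my own choice and does not affect the resulting kernel:

```python
import numpy as np

m, half = 5, 2  # 5x5 window of pixel offsets v, w in {-2, ..., 2}
v, w = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
v, w = v.ravel().astype(float), w.ravel().astype(float)

# Jacobian: 25 rows (one per pixel), 10 columns (bicubic terms v^i w^j with
# i + j <= 3); the first column is the constant term
J = np.column_stack([v**i * w**j for i in range(4) for j in range(4 - i)])

# Convolution coefficients: C = (J^T J)^-1 J^T, i.e. the pseudo-inverse of J
C = np.linalg.pinv(J)

# The row of C for the constant term is the 5x5 smoothing kernel for the
# central pixel; its coefficients sum to 1, so constant data passes unchanged
smooth = C[0].reshape(m, m)
print(round(smooth.sum(), 6))   # 1.0
```

The other rows of C give least-squares estimates of the partial derivatives at the central pixel, which is where the differentiation filters come from.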
Check out the links below, which use SURE (Stein's unbiased risk estimator) to minimize the mean squared error between your estimate and the image. This method is useful for denoising and data smoothing.
This link covers optimization of the parameters of a 1D Savitzky–Golay filter (helpful for understanding the 2D part):
https://ieeexplore.ieee.org/abstract/document/6331560/?part=1
This link covers optimization of the parameters of a 2D Savitzky–Golay filter:
https://ieeexplore.ieee.org/document/6738095/
Basically, I have been trying to forge an understanding of matrix maths over the last few weeks, and after reading (and re-reading) many maths-heavy articles and documentation I think I have an adequate understanding, but I just wanted to make sure!
The definitions I have ended up with are:
/*
Minor
-----
-The determinant of a square sub matrix
-The sub matrix used to calculate a minor is obtained by removing one or more rows/columns from the original matrix
-First minors are minors whose sub matrix is obtained by removing only the row and column of a single element
Cofactor
--------
-The (signed) minor of a single element of a matrix
ie. the cofactor of element 2,3 is (-1)^(2+3) times its minor: the determinant of the submatrix defined by removing row 2 and column 3
Determinant
-----------
-1. Choose any single row or column of the Matrix.
2. For each element in that row/column, multiply the value of the element by the First Minor of that element.
3. Multiply this result by -1 raised to the power of (the element's row index + its column index), which gives the result of step 2 its sign.
4. Then simply sum all these results to get the determinant (a real number) of the Matrix.
*/
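As a sanity check, the four steps translate almost line-for-line into code. A recursive sketch in Python/NumPy (helper names are my own; only sensible for small matrices, since cofactor expansion is O(n!)):

```python
import numpy as np

def minor_matrix(a, row, col):
    # submatrix with the given row and column removed
    return np.delete(np.delete(a, row, axis=0), col, axis=1)

def det(a):
    n = a.shape[0]
    if n == 1:
        return a[0, 0]
    total = 0.0
    for col in range(n):                            # step 1: expand along row 0
        first_minor = det(minor_matrix(a, 0, col))  # step 2: first minor
        sign = (-1) ** (0 + col)                    # step 3: sign (0-based indices)
        total += sign * a[0, col] * first_minor     # step 4: sum the terms
    return total

a = np.array([[1., 2., 3.],
              [2., 2., 1.],
              [3., 1., 1.]])
print(det(a))   # -9.0, agreeing with np.linalg.det(a)
```

Note the sign factor uses 0-based indices here; with the 1-based indices of the written definition the parity, and hence the sign, is the same.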
Please let me know of any holes in my understanding.
Sources
http://en.wikipedia.org: Cofactor_(linear_algebra), Minor_(linear_algebra) & Determinant
http://easyweb.easynet.co.uk/~mrmeanie/matrix/matrices.htm
http://www.geometrictools.com/Documentation/LaplaceExpansionTheorem.pdf (the most helpful)
Geometric tools for computer graphics (this may have missing pages, i have the full copy)
Sounds like you understand determinants -- now go forth and write code! Try writing a solver for simultaneous linear equations in 3 or more variables, using Cramer's Rule.
Since you tagged this question 3dgraphics, matrix and vector multiplication might be a good area to explore next. They come up everywhere in 3d graphics programming.
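If you take up the Cramer's Rule suggestion, a minimal sketch of such a solver might look like this (NumPy for the determinants; fine as an exercise, though np.linalg.solve is what you would use in practice):

```python
import numpy as np

def cramer_solve(A, b):
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("matrix is (near-)singular; Cramer's Rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / d  # x_i = det(A_i) / det(A)
    return x

A = np.array([[2., 1., 1.],
              [1., 3., 2.],
              [1., 0., 0.]])
b = np.array([4., 5., 6.])
print(cramer_solve(A, b))   # same solution as np.linalg.solve(A, b)
```

Cramer's Rule is a good exercise for determinants, but it is numerically and computationally much worse than LU-based solvers for anything beyond small systems.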