conditional probability sequential event - probability

Can someone show me how to obtain the coefficient of w_{k-1} in equation 2 from equation 1, as shown in the Figure below?
In other words, how do I carry out the multiplication on the right-hand side to obtain the expression on the left-hand side in the Figure?

Related

Average Upper and Lower Triangle Matrices

Is there an existing method in Mathematica to average the corresponding elements of the lower-left and upper-right triangles of a matrix?
For example, given the following matrix:
Which in Mathematica form looks like:
{{1,2.2,3},{2.1,1,4},{2.5,2,1}}
I would like to get:
Which in Mathematica form would be:
{{1,0,0},{2.15,1,0},{2.75,3,1}}
I found the answer. There is no built-in function that I could find, but using Transpose and the addition/division operators I was able to come up with a solution as follows:
mata = {{1, 2.2, 3}, {2.1, 1, 4}, {2.5, 2, 1}};
matb = Transpose[mata];           (* the upper triangle becomes the lower triangle *)
mata = LowerTriangularize[mata];  (* keep only the lower triangle of the original *)
matb = LowerTriangularize[matb];  (* keep only the transposed upper triangle *)
avgmat = (mata + matb)/2;         (* element-wise average; the diagonal is unchanged *)
MatrixForm[avgmat]
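For comparison, the same averaging can be sketched in Python with NumPy (a hypothetical translation of the Mathematica approach above, not part of the original answer; np.tril plays the role of LowerTriangularize):

```python
import numpy as np

mata = np.array([[1.0, 2.2, 3.0],
                 [2.1, 1.0, 4.0],
                 [2.5, 2.0, 1.0]])

# np.tril keeps the lower triangle (including the diagonal) and zeroes the rest.
# Averaging it with the lower triangle of the transpose averages each lower-left
# element with its upper-right counterpart; the diagonal averages with itself.
avgmat = (np.tril(mata) + np.tril(mata.T)) / 2
# avgmat is [[1, 0, 0], [2.15, 1, 0], [2.75, 3, 1]]
```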

Kalman Filter Covariance does not increase in prediction step?

I have an extended Kalman filter (EKF) and still struggle with understanding the covariance matrix P, which represents the uncertainty of the filter output.
As far as I understood: in the prediction step the covariance matrix increases due to the process noise Q and the uncertainty of the prediction, via the update P = A P Aᵀ + Q.
In my case A is diagonal with all entries smaller than 1, resulting in smaller values of P after the prediction step. The prediction thus appears to result in higher certainty.
Is that true? If yes can somebody explain it to me?
Thanks!
A has a diagonal form and the values of A are all smaller than 1
That means each variable in your state is predicted to be a fraction of its current value in the next step. The magnitude of the variable goes down, and so does its variance, which is scaled by the square of the corresponding entry of A. So yes: if Q is small relative to that shrinkage, P can indeed decrease in the prediction step.
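The shrinkage can be checked numerically (a minimal sketch; the diagonal A, P and Q below are illustrative values, not taken from the original question):

```python
import numpy as np

# Prediction-step covariance update: P' = A P A^T + Q
A = np.diag([0.5, 0.8])    # diagonal state transition, all entries < 1
P = np.eye(2)              # current covariance
Q = np.diag([0.01, 0.01])  # small process noise

P_pred = A @ P @ A.T + Q
# Diagonal of P_pred: 0.5**2 * 1 + 0.01 = 0.26 and 0.8**2 * 1 + 0.01 = 0.65,
# both smaller than the prior variances of 1 -- the covariance shrank.
```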

Showing two images with the same colorbar in log

I have two sparse matrices "Matrix1" and "Matrix2" of the same size p x n.
By sparse matrix I mean that it contains a lot of exactly zero elements.
I want to show the two matrices under the same colormap and a unique colorbar. Doing this in MATLAB is straightforward:
bottom = min(min(min(Matrix1)), min(min(Matrix2)));  % global minimum over both matrices
top = max(max(max(Matrix1)), max(max(Matrix2)));     % global maximum over both matrices
subplot(1,2,1)
imagesc(Matrix1)
colormap(gray)
caxis manual
caxis([bottom top]);  % same color limits for both panels
subplot(1,2,2)
imagesc(Matrix2)
colormap(gray)
caxis manual
caxis([bottom top]);
colorbar;
My problem:
In fact, when I show a matrix using imagesc(Matrix), the weak components (noise or background) are hidden; they only become visible when using imagesc(10*log10(Matrix)).
That is why I want to show 10*log10 of the matrices. But in this case the minimum value will be -Inf, since the matrices are sparse, and caxis will throw an error because bottom is equal to -Inf.
What do you suggest? How can I modify the above code?
Any help would be very appreciated!
A very important point is that the minimum value in your matrix will always be 0. Leveraging this, a very simple way to address your problem is to add 1 inside the log operation, so that values that are 0 in the original matrix also map to 0 after the log. This avoids the -Inf values you're encountering. In fact, this is a very common way of visualizing the magnitude of the Fourier transform. Adding 1 inside the logarithm ensures that the output has no negative values, while the monotonic shape of the curve is preserved: the effect is simply a translation of the log curve by 1 unit to the left.
Therefore, simply do imagesc(10*log10(1 + Matrix)); then the minimum is always bounded at 0, while the maximum is only bounded by the largest value seen in Matrix.
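The effect of the +1 trick can be sketched in Python with NumPy (a hypothetical illustration with a made-up sparse matrix, not part of the original answer):

```python
import numpy as np

# Sparse matrix: mostly zeros with a few positive entries.
M = np.zeros((4, 4))
M[0, 1] = 100.0
M[2, 3] = 0.5

# Naive log scaling leaves -inf wherever M is zero ...
naive = 10 * np.log10(M, out=np.full_like(M, -np.inf), where=M > 0)

# ... while adding 1 first keeps every value finite and maps 0 -> 0 dB,
# so caxis-style limits [safe.min(), safe.max()] are always valid.
safe = 10 * np.log10(1 + M)
```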

Eigenvector. Implementing Jacobi algorithm

I am implementing the Jacobi algorithm to get the eigenvectors of a symmetric matrix. I don't understand why I obtain different eigenvectors from my application (same result as mine here: http://fptchlx02.tu-graz.ac.at/cgi-bin/access.com?c1=0000&c2=0000&c3=0000&file=0638) and different ones from Wolfram Alpha: http://www.wolframalpha.com/input/?i=eigenvector%7B%7B1%2C2%2C3%7D%2C%7B2%2C2%2C1%7D%2C%7B3%2C1%2C1%7D%7D
Example matrix:
1 2 3
2 2 1
3 1 1
My Result:
0.7400944496522529, 0.6305371413491765, 0.23384421945632447
-0.20230251371232585, 0.5403584533063043, -0.8167535949636785
-0.6413531776951003, 0.5571668060588798, 0.5274763043839444
Result from WA:
1.13168, 0.969831, 1
-1.15396, 0.315431, 1
0.443327, -1.54842, 1
I expect the solution is trivial, but I can't find it. I asked this question on MathOverflow and they pointed me to this site.
Eigenvectors of a matrix are not unique, and there are multiple possible decompositions; in fact, only eigenspaces can be defined uniquely. Both results that you are receiving are valid. You can easily see that by asking Wolfram Alpha to orthogonalize the second matrix. Run the following query:
Orthogonalize[{{1.13168, 0.969831, 1.}, {-1.15396, 0.315431, 1.}, {0.443327, -1.54842, 1.}}]
to obtain
0.630537 0.540358 0.557168
-0.740094 0.202306 0.641353
0.233844 -0.816754 0.527475
Now you can see that your algorithm returns a correct result. First, the matrix is transposed: WA gave you row vectors, and your algorithm returns them as columns. Then, the first vector is multiplied by -1, but any eigenvector can be multiplied by a non-zero constant and remain a valid eigenvector. Otherwise, the results match perfectly.
You may also find the following Mathematics StackExchange answer helpful: Are the eigenvectors of a real symmetric matrix always an orthonormal basis without change?
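The equivalence can also be checked numerically (a sketch using NumPy rather than the asker's Jacobi code; the vectors are compared up to normalization and sign):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 2.0, 1.0],
              [3.0, 1.0, 1.0]])

# Wolfram Alpha's (unnormalized) eigenvector for the largest eigenvalue.
wa = np.array([1.13168, 0.969831, 1.0])
wa_unit = wa / np.linalg.norm(wa)

# eigh returns orthonormal eigenvectors as columns, eigenvalues ascending.
vals, vecs = np.linalg.eigh(A)
dominant = vecs[:, -1]  # eigenvector of the largest eigenvalue

# The two agree up to an overall sign -- both are valid eigenvectors.
same = np.allclose(np.abs(wa_unit), np.abs(dominant), atol=1e-4)
```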

Confirm I understand matrix determinants

Basically, I have been trying to build an understanding of matrix maths over the last few weeks, and after reading (and re-reading) many maths-heavy articles and documentation I think I have an adequate understanding, but I just wanted to make sure!
The definitions I have ended up with are:
/*
Minor
-----
- The determinant of a square submatrix
- The submatrix used to calculate a minor is obtained by removing one or more rows, and an equal number of columns, from the original matrix
- First minors are minors of a submatrix where only the row and column of a single element have been removed
Cofactor
--------
- The (signed) first minor of a single element of a matrix
  i.e. the cofactor of element (2,3) is the determinant of the submatrix obtained by removing row 2 and column 3, multiplied by the sign (-1)^(2+3)
Determinant
-----------
1. Choose any single row or column of the matrix.
2. For each element in that row/column, multiply the value of the element by the first minor of that element.
3. Multiply this result by (-1) raised to the power of the element's row index plus its column index, which gives the result of step 2 a sign.
4. Sum all these results to get the determinant (a real number) of the matrix.
*/
Please let me know of any holes in my understanding.
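The four determinant steps above can be sketched in Python (a hypothetical recursive implementation of cofactor expansion along the first row, not taken from the question's sources):

```python
def determinant(m):
    """Determinant by cofactor expansion along the first row (step 1 fixes row 0)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for col in range(n):
        # Submatrix with row 0 and this column removed (the first minor's matrix).
        sub = [row[:col] + row[col + 1:] for row in m[1:]]
        # Steps 2-3: element times its first minor, with sign (-1)^(0 + col).
        total += (-1) ** col * m[0][col] * determinant(sub)
    return total  # step 4: the signed terms summed

print(determinant([[1, 2, 3], [2, 2, 1], [3, 1, 1]]))  # prints -9
```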
Sources
http://en.wikipedia.org/wiki/Cofactor_(linear_algebra), http://en.wikipedia.org/wiki/Minor_(linear_algebra) and http://en.wikipedia.org/wiki/Determinant
http://easyweb.easynet.co.uk/~mrmeanie/matrix/matrices.htm
http://www.geometrictools.com/Documentation/LaplaceExpansionTheorem.pdf (the most helpful)
Geometric Tools for Computer Graphics (this may have missing pages; I have the full copy)
Sounds like you understand determinants -- now go forth and write code! Try writing a solver for simultaneous linear equations in 3 or more variables, using Cramer's Rule.
Since you tagged this question 3dgraphics, matrix and vector multiplication might be a good area to explore next. They come up everywhere in 3d graphics programming.
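The suggested exercise might look like this (a minimal Cramer's-rule sketch, assuming NumPy for the determinants; the example system is mine, not from the answer):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("singular system: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # replace column i with the right-hand side
        x[i] = np.linalg.det(Ai) / d
    return x

# 3-variable example: x=1, y=2, z=3 solves this system.
solution = cramer_solve([[2, 1, 1], [1, 3, 2], [1, 0, 0]], [7, 13, 1])
```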
