Savitzky–Golay filter for 2D images

I would like to ask about the Savitzky–Golay filter on 2D images.
What are the best coefficients and order to choose for finding local details in an image?
Moreover, if someone has an explanation of the coefficients and the orders for 2D images, that would be perfect.
Thanks in advance.

Please check out this website:
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter#Two-dimensional_convolution_coefficients
UPDATE: (Thank you for the suggestion, @Rasclatt)
The relevant section has been reproduced here:
Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels.[16] [17] The trick is to transform part of the table into a row by a simple ordering of the indices of the pixels. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable, z, to a set of m data points, the two-dimensional coefficients are found by fitting a polynomial in subsidiary variables v and w to a set of m × m data points. The following example, for a bicubic polynomial and m = 5, illustrates the process, which parallels the process for the one-dimensional case, above.[18]
The square of 25 data values, d1 through d25, becomes a vector when the rows are placed one after another:
d = (d1, d2, ..., d25)^T
The Jacobian J has 10 columns, one for each of the parameters a00 through a03, and 25 rows, one for each pair of v and w values. Each row has the form
1, v, w, v^2, vw, w^2, v^3, v^2w, vw^2, w^3
The convolution coefficients are calculated as
C = (J^T J)^-1 J^T
The first row of C contains 25 convolution coefficients, which can be multiplied with the 25 data values to provide a smoothed value for the central data point (13) of the 25.
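Below is a minimal MATLAB sketch of that computation (plain MATLAB, no toolboxes; img is just a placeholder for your image). It builds the Jacobian J for the bicubic basis over a 5x5 window, forms C = (J^T J)^-1 J^T, and reshapes the first row of C into a 5x5 smoothing kernel:
m = 5;                          % window size (m x m)
half = (m - 1) / 2;
[v, w] = meshgrid(-half:half);  % subsidiary variables v, w over the window
v = v(:); w = w(:);             % order the 25 pixels as a single column
% One column of J per parameter a00 ... a03 of the bicubic polynomial
J = [ones(size(v)), v, w, v.^2, v.*w, w.^2, v.^3, v.^2.*w, v.*w.^2, w.^3];
C = (J' * J) \ J';              % 10 x 25 matrix of convolution coefficients
% First row of C: the 25 smoothing coefficients for the central pixel.
% Reshape them to a 5x5 kernel, using the same pixel ordering as above.
kernel = reshape(C(1, :), m, m);
% smoothed = conv2(double(img), kernel, 'same');   % apply to an image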

Check out the links below, which use SURE (Stein's unbiased risk estimator) to minimize the mean squared error between your estimate and the image. This method is useful for denoising and data smoothing.
This link covers optimization of the parameters of the 1D Savitzky–Golay filter (helpful for understanding the 2D part):
https://ieeexplore.ieee.org/abstract/document/6331560/?part=1
This link covers optimization of the parameters of the 2D Savitzky–Golay filter:
https://ieeexplore.ieee.org/document/6738095/


OpenGL: the inverse of the transformations

If I have 3 different matrices, one for rotation (R), one for translation (T), and one for scaling (S), how do I achieve the reverse effect of these matrices by manipulating the matrix that caused it?
What I have gathered so far is that if I transpose the rotation matrix, I will achieve what I desire (is this correct?). And what about the other two?
And if there is a common way, are there any special cases where these ways will not suffice?
The inverse of the rotation matrix R is indeed its transpose R^T.
The inverse of the scaling matrix S is easy, as it contains only diagonal elements (in the first three rows; the last row is always equal to (0 0 0 1)).
So you just replace each diagonal s_i with 1/s_i.
Finally, the translation matrix T is an identity matrix with the translation vector in the last column. The inverse is obtained by replacing those elements with their negatives.
Also, the inverse of the product of the 3 matrices is the reverse product of the inverses:
(S T R)^-1 = R^-1 T^-1 S^-1
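Here is a quick numerical sanity check of these rules as a MATLAB sketch (the angle, scale factors and translation vector are arbitrary example values):
theta = pi/5;
R = [cos(theta) -sin(theta) 0 0;   % rotation about the z axis
     sin(theta)  cos(theta) 0 0;
     0           0          1 0;
     0           0          0 1];
S = diag([2 3 4 1]);                % scaling matrix; last diagonal entry stays 1
T = eye(4); T(1:3, 4) = [5; -2; 7]; % translation vector in the last column
Rinv = R';                          % inverse of a rotation = its transpose
Sinv = diag([1/2 1/3 1/4 1]);       % reciprocals of the diagonal scale factors
Tinv = eye(4); Tinv(1:3, 4) = -T(1:3, 4);  % negate the translation vector
% Inverse of the product is the product of inverses in reverse order
M = S * T * R;
disp(norm(inv(M) - Rinv * Tinv * Sinv))    % ~0 up to floating-point error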

How does Principal Component Initialization work for determining the weights of the map vectors in Self Organizing Maps?

I studied a fundamental SOM initialization and was looking to understand exactly how this process, PCI, works for initializing the weight vectors on the map. My understanding is that for a two-dimensional map, this initialization method looks at the eigenvectors for the two largest eigenvalues of the data matrix and then uses the subspace spanned by these eigenvectors to initialize the map. Does that mean that, in order to get the initial map weights, this method takes random linear combinations of the two largest eigenvectors to generate the map weights? Is there a pattern?
For example, for 40 input data vectors on the map, does the lininit initialization method take combinations a1*[e1] + a2*[e2], where [e1] and [e2] are the two largest eigenvectors and a1 and a2 are random integers ranging from -3 to 3? Or is there a different mechanism? I was looking to make sure I knew exactly how lininit takes the two largest eigenvectors of the input data matrix and uses them to construct the initial weight vectors for the map.
The SOM creates a map that has the neighbourhood relationship between nearby nodes. Random initialisation does not help this process, since the nodes start randomly. Therefore, the idea of using the PCA initialisation is just a shortcut to get the map closer to the final state. This saves a lot of computation.
So how does this work? The first two principal components (PCs) are used, and the initial weights are set as linear combinations of them. Rather than using random a1 and a2, the weights are set in a range that corresponds to the scale of the principal components.
For example, for a 5x3 map, a1 and a2 can both be in the range (-1, 1) with the relevant number of elements. In other words, for the 5x3 map, a1 = [-1.0 -0.5 0.0 0.5 1.0] and a2 = [-1.0 0.0 1.0], with 5 nodes and 3 nodes, respectively.
Then set each of the weights of nodes. For a rectangular SOM, each node has indices [m, n]. Use the values of a1[m] and a2[n]. Thus, for all m = [1 2 3 4 5] and n = [1 2 3]:
weight[m, n] = a1[m] * e1 + a2[n] * e2
That is how to initialize the weights using the principal components. This makes the initial state globally ordered, so now the SOM algorithm is used to create the local ordering.
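Here is a small MATLAB sketch of that scheme for a 5x3 map (X, mapRows and mapCols are placeholder names; X is assumed to be an N-by-D data matrix with one sample per row):
mapRows = 5; mapCols = 3;
Xc = bsxfun(@minus, X, mean(X, 1));      % centre the data
[~, ~, V] = svd(Xc, 'econ');             % columns of V are the principal directions
e1 = V(:, 1)';                           % first (largest) principal component, 1-by-D
e2 = V(:, 2)';                           % second principal component, 1-by-D
a1 = linspace(-1, 1, mapRows);           % 5 coefficients along the first PC
a2 = linspace(-1, 1, mapCols);           % 3 coefficients along the second PC
W = zeros(mapRows, mapCols, size(X, 2)); % map of weight vectors
for m = 1:mapRows
    for n = 1:mapCols
        % weight[m, n] = a1[m] * e1 + a2[n] * e2, as in the formula above
        W(m, n, :) = reshape(a1(m) * e1 + a2(n) * e2, 1, 1, []);
    end
end
% Toolbox implementations such as lininit typically also add the data mean back
% and scale each PC by the spread of the data along it, so the initial map sits
% inside the data cloud.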
The Principal Component part of the name is a reference to https://en.wikipedia.org/wiki/Principal_component_analysis.
Here is the idea. You start with data points described by vectors of many underlying factors. But those factors may be correlated in your data. So, for example, if you're measuring height, weight, blood pressure, etc., you expect that tall people will weigh more. What you want to do is replace this with vectors of factors that are not correlated with each other in your data.
So your principal component is a vector of length 1 which is as strongly correlated as possible with the variation in your dataset.
Your secondary component is the vector of length 1 at right angles to the first which is as strongly correlated as possible with the rest of the variation in your data set.
Your tertiary component is the vector of length 1 at right angles to the first two which is as strongly correlated as possible with the rest of the variation in your data set.
And so on.
In practice you may start with many factors, but most of the information is captured in just the first few. For example, in the results of intelligence testing, the first component is IQ and the second is the difference between how good you are at verbal versus quantitative reasoning.
How this applies to SOM initialization is that a simple linear model built off of PCA analysis is a pretty good guess for the answer that you're looking for, so starting there reduces how much work you have to do to finish getting the answer.

Difference between observations and variables in Matlab

I'm kind of ashamed to even ask this, but here goes. In every MATLAB help file where the input matrix is an N-by-D matrix X, MATLAB describes the matrix arrangement as
Data, specified as a numeric matrix. The rows of X correspond to
observations, and the columns correspond to variables.
The above is taken from the help for kmeans.
I'm kind of confused as to what MATLAB means by observations and variables.
Suppose I have a data matrix composed of 100 images. Each image is represented by a feature vector of size 128 x 1. So are the 100 images my observations and the 128 features my variables, or is it the other way around?
Will my data matrix be of size 128 x 100 or 100 x 128?
Eugene's explanation in a statistical and probability construct is great, but I would like to explain it more in the viewpoint of data analysis and image processing.
Think of an observation as one sample from your data set. In this case, one observation is one image. For each sample, it has some dimensionality associated to it or a number of variables used to represent such a sample.
For example, if we had a set of 100 2D Cartesian points, the number of observations is 100, while the dimensionality, or the total number of variables used to describe each point, is 2: we have an x coordinate and a y coordinate. As such, in the MATLAB universe, we'd place all of these data points into a single matrix. Each row of the matrix denotes one point in your data set. Therefore, the matrix you would create here is 100 x 2.
Now, go back to your problem. We have 100 images and each image can be expressed by 128 features. This suspiciously looks like you are trying to use SIFT or SURF to represent an image so think of this situation where each image can be described by a 128-dimensional vector, or a histogram with bins of 128 elements. Each feature is part of the dimensionality makeup that makes up the image. Therefore, you would have a 100 x 128 matrix. Each row represents one image, where each image is represented as a 1 x 128 feature vector.
In general, MATLAB's machine learning and data analysis algorithms assume that your matrix is M x N, where M is the total number of points that make up your data set while N is the dimensionality of one such point in your data set. In MATLAB's universe, the total number of observations is equal to the total number of points in your data set, while the total number of features / distinct attributes to represent one sample is the total number of variables.
tl;dr
Observation: One sample from your data set
Variable: One feature / attribute that helps describe an observation or sample in your data set.
Number of observations: Total number of points in your data set
Number of variables: Total number of features / attributes that make up an observation or sample in your data set.
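As a tiny illustration of that convention (synthetic numbers, just to show the shapes; rand stands in for whatever descriptor you actually compute, and kmeans is the Statistics Toolbox function quoted above):
numImages   = 100;                    % observations
numFeatures = 128;                    % variables
X = zeros(numImages, numFeatures);    % 100 x 128 data matrix
for i = 1:numImages
    X(i, :) = rand(1, numFeatures);   % stand-in for the i-th image's 128-D descriptor
end
idx = kmeans(X, 5);                   % e.g. cluster the 100 images into 5 groups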
It looks like you are talking about some specific statistical/probabilistic functions. In statistics or probability theory there are some random variables that are results of some kind of measurements/observations over time (or some other dimension). So such a matrix is just a collection of N measurements of D different random variables.

algorithm for grouping elements from multi dimensional matrix

Hi fellows,
I have a question regarding an algorithm problem I am trying to solve.
The input data to the algorithm is a matrix that can be described as follows:
The matrix has N dimensions, like the one in the attached picture (which has 2 dimensions for example).
There are a number of elements (denoted as c1, .. c5 in the attached sample matrix) that are mapped into the cells of the matrix.
An element can occur in one or more cells in the matrix.
The algorithm should work as follows:
Pick (and then remove) elements from the matrix and group them into buckets of E elements, so that:
the elements in each bucket don't have any common values on any of the dimensions
each element is assigned only into one bucket
For example, referring to the sample matrix attached, if we selected E to be 2, then a possible grouping could be: {c1, c2}; {c3, c4}; and {c5} by itself. Check: c1 and c2 (and likewise c3 and c4) don't have any common column or row values in the matrix.
The problem has been doing my head in for some time now and I thought it may be worth asking if some whiz here has a solution.
Thanks

Principal component analysis m-by-n matrix implementation

Does anyone know how to implement the Principal component analysis (PCA) on a m-by-n matrix in matlab for normalization?
Assuming each column is a sample (that is, you have n samples, each of dimension m), stored in a matrix A, you first have to subtract off the mean over the columns (the mean sample):
Amm = bsxfun(@minus,A,mean(A,2));
then you want to do an eigenvalue decomposition on 1/size(Amm,2)*Amm*Amm' (you can use 1/(size(Amm,2)-1) as a scale factor if you want an interpretation as an unbiased covariance matrix) with:
[v,d] = eig(1/size(Amm,2)*Amm*Amm');
And the columns of v are going to be your PCA vectors. The entries of d are going to be your corresponding "variances".
However, if your m is huge then this is not the best way to go because storing Amm*Amm' is not practical. You want to instead compute:
[u,s,v] = svd(1/sqrt(size(Amm,2))*Amm,'econ');
This time u contains your PCA vectors. The entries of s are related to the entries of d by a sqrt.
Note: there's another way to go if m is huge, i.e. computing eig(1/size(Amm,2)*Amm'*Amm); (notice the switch of transposes as compared to above) and doing a little trickery, but it's a longer explanation so I won't get into it.
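Putting the pieces above together as one self-contained sketch (the sizes and random data here are just for illustration; A holds one sample per column, as assumed above):
m = 128; n = 100;                              % example sizes
A = randn(m, n);                               % hypothetical data, one sample per column
Amm = bsxfun(@minus, A, mean(A, 2));           % subtract the mean sample
[v, d] = eig(1/size(Amm, 2) * (Amm * Amm'));   % eigendecomposition route
[vars, order] = sort(diag(d), 'descend');      % sort components by "variance"
v = v(:, order);
[u, s, ~] = svd(1/sqrt(size(Amm, 2)) * Amm, 'econ');   % SVD route, avoids forming the m-by-m product
% The leading columns of u match the leading columns of v (up to sign),
% and diag(s).^2 matches the leading entries of vars.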
