I hope someone can help me with my question.
In an article, factor loadings (including uniqueness and complexity indices) are reported. An oblique rotation was used.
To my knowledge, the factor loadings can be interpreted as unstandardized regression coefficients. Does anyone know if there is a formula to standardize the factor loadings?
Greetings
I solved an analytically unsolvable problem with numerical methods. I am searching for X, based on a desired Y value. f(x)=y is possible, x=f^-1(y) is not.
Currently the algorithm does a binary search. It starts at X=50%, calculates Y, returns Y_err=Y-Y_demand. It keeps stepping by intervals of 5% in the direction of shrinking Y_err, until Y_err changes sign, then it reduces the step and steps in the opposite direction. This works, but it's embarrassingly slow & inefficient.
Below, an example chart of x=f^-1(y). I chose one with high coefficients for the nonlinear part.
[Example chart of x = f^-1(y)]
It varies depending on coefficients, but always has this pseudoparabolic shape. It's of course nonlinear and even 9th order polynomial approximations don't offer satisfactory precision.
For simplicity's sake let's say the inflection point is at X=50%, and I am looking only for solutions where X>50%.
How should I proceed? I'm looking to optimise as much as possible. What are some good algorithms? Thanks.
EDIT: Thank you for pointing out that this is not in fact a binary search. I've updated the code and now have much better results by comparison.
I'm not sure if Newton's method applies here, or at least I don't know how to apply it. One-way trial and error is all I can do. When I have some more time I will try to learn and implement regula falsi.
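For reference, this is roughly the regula falsi step I plan to try (a rough sketch in Julia; f, y_demand, and the bracket [0.5, 1.0] are placeholders for my actual forward model and search range):

# Regula falsi: find x in [a, b] with f(x) = y_demand, assuming f(a) and f(b)
# bracket the target (their errors have opposite signs).
function regula_falsi(f, y_demand, a, b; tol = 1e-9, maxiter = 100)
    fa = f(a) - y_demand
    fb = f(b) - y_demand
    @assert sign(fa) != sign(fb) "the bracket must straddle the target"
    x = a
    for _ in 1:maxiter
        # Secant line through (a, fa) and (b, fb) crosses zero at x.
        x = b - fb * (b - a) / (fb - fa)
        fx = f(x) - y_demand
        abs(fx) < tol && break
        # Keep the end of the bracket that still straddles the root.
        if sign(fx) == sign(fa)
            a, fa = x, fx
        else
            b, fb = x, fx
        end
    end
    return x
end

# Example with a stand-in monotone f on X > 50%:
f(x) = 2x^3 + 0.5x            # placeholder for the real forward model
regula_falsi(f, 1.0, 0.5, 1.0)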
I'm teaching myself data structures, and am on a section giving a brief outline of time analysis. The following problem was given:
"Each of the following are formulas for the number of operations in some
algorithm. Express each formula in big-O notation."
The problem then goes on to give multiple scenarios. One was:
g.) The number of times that n can be divided by 10 before dropping below 1.0.
(Note: It doesn't state what n is exactly, so I'm assuming it's just some input size. But I don't think it matters in terms of how the problem is stated)
I reasoned that as this would relate to its order of magnitude, it should just be log n. However, the text says that it should be quadratic. Is there something I am missing?
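To convince myself, I did a quick empirical check (Julia; the helper name is mine), and the count does seem to grow like log10(n):

# Count how many times n can be divided by 10 before dropping below 1.0.
function count_divisions(n)
    count = 0
    while n >= 1.0
        n /= 10
        count += 1
    end
    return count
end

for n in (10.0, 1_000.0, 1_000_000.0)
    println(n, " -> ", count_divisions(n), " divisions (log10(n) = ", log10(n), ")")
end
# The count comes out as floor(log10(n)) + 1, i.e. Θ(log n), not quadratic.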
Any help with my thinking would be greatly appreciated.
I stumbled upon something which I consider very strange.
As an example consider the code
A = reshape(1:6, 3,2)
A/[1 1]
which gives
3×1 Array{Float64,2}:
2.5
3.5
4.5
As I understand it, in general such a division gives a weighted average of the columns, where each weight is inversely proportional to the corresponding element of the vector.
So my question is, why is it defined this way?
What is the mathematical justification of this definition?
It's the minimum-error solution to |A - v*[1 1]|₂, which, being overconstrained, has no exact solution in general (i.e. there is no value v such that the norm is precisely zero). The behavior of / and \ is heavily overloaded, solving both under and overconstrained systems by a variety of techniques and heuristics. Whether this kind of overloading is a good idea or not is debatable, but it's what people have come to expect from these operations in Matlab and Octave, and it's often quite convenient to have so much functionality available in a single operator.
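For instance, a quick check on the example above that the result matches the explicit least-squares solution (the row means):

A = reshape(1:6, 3, 2)

v1 = A / [1 1]                        # what the question computes
v2 = sum(A, dims = 2) / 2             # row means: the minimizer of |A - v*[1 1]|₂
v3 = A * [1 1]' / ([1 1] * [1 1]')    # normal equations: A*Bᵀ*(B*Bᵀ)⁻¹ with B = [1 1]

v1 ≈ v2 ≈ v3                          # true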
Let A be an NxN matrix and b be a Nx1 column vector. Then \ solves Ax=b, and / solves xA=b.
As Stefan mentions, this is extended to overdetermined cases as the least squares solution. This is done via the QR or SVD decompositions. See the details on these algorithms to see why this is the case. Hint: the linear form of the OLS estimator can actually be written as the solution to a matrix decomposition, so it's the same thing.
Now you might ask, how does it actually solve it? That's a complicated question. Essentially, it uses a matrix factorization. But which matrix factorization is used depends on the matrix type. The reason is that Gaussian elimination is O(n^3), so treating the problem generically is usually not a good idea. But whenever you can specialize, you can get speedups. So essentially \ (and /, which transposes and calls \) checks for a bunch of special types and picks a factorization or other algorithm (LU, QR, SVD, Cholesky, etc.) based on the matrix type. The flow chart from MATLAB explains this very well. There are a lot of details here, and it gets even more detailed when the matrix is sparse. Also IterativeSolvers.jl should be mentioned because it's another set of algorithms for solving Ax=b.
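To see this specialization in action you can call factorize directly, which applies the same kind of dispatch (a rough illustration; the exact factorization chosen can vary with the matrix contents and the Julia version):

using LinearAlgebra

A = rand(100, 100)
b = rand(100)

factorize(A)             # general dense square -> LU
factorize(A + A')        # dense symmetric      -> Bunch-Kaufman (or Cholesky if positive definite)
factorize(triu(A))       # triangular           -> UpperTriangular, solved by substitution
factorize(rand(100, 50)) # non-square           -> QR, i.e. the least squares path

# \ does this dispatch internally; keeping the factorization around lets you
# reuse it for many right-hand sides:
F = factorize(A)
x = F \ b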
Most applied math problems reduce to linear algebra, with solving Ax=b being one of the most important and difficult problems, which is why there is a ton of research on the subject. In fact, you can probably say that the vast majority of the field of numerical linear algebra is devoted to finding fast methods for solving Ax=b on specific matrix types. \ essentially puts all of the direct (non-iterative) methods into one convenient operator.
Recently I began to study deconvolution algorithms and met the following acquisition model:

g = f * h + n,

where f is the original (latent) image, g is the input (observed) image, h is the point spread function (degradation kernel), n is random additive noise and * is the convolution operator.
If we know g and h, then we can recover f using the Richardson-Lucy algorithm:

f_{i+1} = f_i · ( (g / (h * f_i)) * ĥ ),   f_0 = g,

where ĥ(x, y) = h(W − x, H − y), (W, H) is the size of the rectangular support of h, and multiplication and division are pointwise. Simple enough to code in C++, so I did just that. It turned out that f_i approximates f while i is less than some m, and then the estimate starts to rapidly degrade. So the algorithm just needs to be stopped at this m - the most satisfactory iteration.
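For reference, here is roughly what one update looks like (sketched in Julia rather than my actual C++; the naive zero-padded "same"-size convolution and all names are just for illustration):

# Naive 2-D "same"-size convolution with zero padding (good enough for a sketch).
function conv2same(img, k)
    H, W = size(img)
    kh, kw = size(k)
    ch, cw = (kh + 1) ÷ 2, (kw + 1) ÷ 2
    out = zeros(H, W)
    for y in 1:H, x in 1:W
        s = 0.0
        for j in 1:kh, i in 1:kw
            yy, xx = y - (j - ch), x - (i - cw)
            if 1 <= yy <= H && 1 <= xx <= W
                s += img[yy, xx] * k[j, i]
            end
        end
        out[y, x] = s
    end
    return out
end

flip(k) = k[end:-1:1, end:-1:1]   # ĥ(x, y) = h(W - x, H - y)

# One Richardson-Lucy update: f ← f .* ((g ./ (h * f)) * ĥ), with the two
# convolutions done above and everything else pointwise.
function rl_step(f, g, h; eps = 1e-12)
    ratio = g ./ (conv2same(f, h) .+ eps)
    return f .* conv2same(ratio, flip(h))
end

# Start from f0 = g and stop at the iteration m that looks best:
# f = copy(g); for i in 1:m; f = rl_step(f, g, h); end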
If the point spread function h is also unknown then the problem is said to be blind, and a modification of the Richardson-Lucy algorithm can be applied, alternating updates of the blur and the image at each iteration:

h_{i+1} = h_i · ( (g / (f_i * h_i)) * f̂_i ),
f_{i+1} = f_i · ( (g / (f_i * h_{i+1})) * ĥ_{i+1} ),

where f̂ and ĥ again denote the flipped versions. For the initial guess for f we can take g, as before, and for the initial guess for h we can take a trivial PSF, or any simple form that would look similar to the observed image degradation. This algorithm also works quite well on the simulated data.
Now I consider the multiframe blind deconvolution problem with the following acquisition model:

g_k = f * h_k + n_k,   k = 1, …, K,

where the latent image f is the same for all frames, and each observed frame g_k has its own PSF h_k and noise n_k.
Is there a way to develop the Richardson-Lucy algorithm for solving the problem in this formulation? If not, is there any other iterative procedure for recovering f that wouldn't be much more complicated than the previous ones?
According to your acquisition model, the latent image (f) remains the same while the observed images differ due to different PSF and noise realizations. One way to look at it is as a motion-blur problem where a sharp and noise-free image (f) is corrupted by a motion blur kernel. As this is an ill-posed problem, in most of the literature it's solved iteratively by estimating the blur kernel and the latent image. The way you solve this depends entirely on your objective function.
For example in some papers IRLS is used to estimate the blur kernel. You can find a lot of literature on this.
If you want to use Richardson-Lucy blind deconvolution, then use it on just one frame.
One strategy can be: in each iteration while recovering f, assign different weights to the contribution from each g (observed image). You can incorporate the different weights in the objective function or calculate them according to the estimated blur kernel.
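A rough sketch of that idea (assuming a "same"-size convolution helper conv2same and a PSF-flip helper flip like the ones sketched in the question; the weights w[k] are entirely up to you, e.g. derived from the estimated blur kernels and normalized to sum to 1):

# One f-update that blends the per-frame Richardson-Lucy corrections with
# weights w[k]; gs and hs are vectors of observed frames and their PSFs.
function rl_step_weighted(f, gs, hs, w; eps = 1e-12)
    correction = zeros(size(f))
    for k in eachindex(gs)
        ratio = gs[k] ./ (conv2same(f, hs[k]) .+ eps)
        correction .+= w[k] .* conv2same(ratio, flip(hs[k]))
    end
    return f .* correction
end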
Is there a way to develop the Richardson-Lucy algorithm for solving the problem in this formulation?
I'm not a specialist in this area, but I don't think that such a way to construct an algorithm exists, at least not straightforwardly. Here is my argument for this. The first problem you described (when the PSF is known) is already ill-posed due to the random nature of the noise and the loss of information about the convolution near image edges. The second problem on your list, single-channel blind deconvolution, is an extension of the previous one. In this case it is in addition underdetermined, so the ill-posedness expands, and so it's natural that the method to solve this problem is developed from the method for solving the first problem. Now when we consider the multichannel blind deconvolution formulation, we add a bunch of additional information to our previous model, and so the problem goes from underdetermined to overdetermined. This is a whole other kind of ill-posedness, and hence different approaches to the solution are required.
Is there any other iterative procedure for recovering f that wouldn't be much more complicated than the previous ones?
I can recommend the algorithm introduced by Šroubek and Milanfar in [1]. I'm not sure whether it is much more complicated in your opinion or not, but it is one of the most recent and robust. The formulation of the problem is precisely the same as you wrote. The algorithm takes as input K > 1 images, an upper bound L on the PSF size, and four tuning parameters: alpha, beta, gamma, delta. To specify gamma, for example, you will need to estimate the variance of the noise on your input images, take the largest variance var, and set gamma = 1/var. The algorithm solves the following optimization problem using alternating minimization:

min over f, h_1, …, h_K of   F(f, h_1, …, h_K) + alpha·Q(f) + beta·R(h_1, …, h_K),

where F is the data fidelity term and Q and R are regularizers of the image and blurs, respectively.
For a detailed analysis of the algorithm see [1]; for a collection of different deconvolution formulations and their solutions see [2]. Hope it helps.
References:
[1] Filip Šroubek, Peyman Milanfar. Robust Multichannel Blind Deconvolution via Fast Alternating Minimization. IEEE Transactions on Image Processing, Vol. 21, No. 4, April 2012.
[2] Patrizio Campisi, Karen Egiazarian. Blind Image Deconvolution: Theory and Applications.
We know that Genetic Algorithms (or evolutionary computation) work with an encoding of the points in our solution space Ω rather than with these points directly. In the literature, we often find that GAs have the following drawback: (1) since many chromosomes are coded into a similar point of Ω, or similar chromosomes map to very different points, the efficiency is quite low. Do you think that is really a drawback? Because these kinds of algorithms use the mutation operator in each iteration to diversify the candidate solutions. To add more diversification we simply increase the probability of crossover. And we mustn't forget that our initial population (of chromosomes) is randomly generated (yet more diversification). The question is: if you think that (1) is a drawback of GAs, can you provide more details? Thank you.
Mutation and random initialization are not enough to combat the problem known as genetic drift, which is the major problem of genetic algorithms. Genetic drift means that the GA may quickly lose most of its genetic diversity and the search proceeds in a way that is not beneficial for crossover. This is because the random initial population quickly converges. Mutation is a different thing: if it is high it will diversify, true, but at the same time it will prevent convergence, and the solutions will remain at a certain distance from the optimum with higher probability. You will need to adapt the mutation probability (not the crossover probability) during the search. In a similar manner the Evolution Strategy, which is similar to a GA, adapts the mutation strength during the search.
We have developed a variant of the GA that is called the OffspringSelection GA (OSGA), which introduces another selection step after crossover. Only those children are accepted that surpass their parents' fitness (the better parent's, the worse parent's, or any linearly interpolated value in between). This way you can even use random parent selection and put the bias on the quality of the offspring. It has been shown that this slows genetic drift. The algorithm is implemented in our framework HeuristicLab. It features a GUI, so you can download it and try it on some problems.
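As a sketch, the acceptance rule looks roughly like this (maximization assumed; the comparison factor c and the function name are mine, not HeuristicLab's API):

# Offspring selection: keep a child only if it beats a threshold interpolated
# between its parents' fitness values.
function accept_child(child_fitness, parent1_fitness, parent2_fitness; c = 1.0)
    worse, better = minmax(parent1_fitness, parent2_fitness)
    threshold = worse + c * (better - worse)   # c = 0: worse parent, c = 1: better parent
    return child_fitness > threshold
end

# Example: parents with fitness 3.0 and 5.0, child with fitness 4.5
accept_child(4.5, 3.0, 5.0; c = 0.5)   # true  (threshold 4.0)
accept_child(4.5, 3.0, 5.0; c = 1.0)   # false (threshold 5.0)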
Other techniques that combat genetic drift are niching and crowding, which let diversity influence the selection and thus introduce another, though different, bias.
EDIT: I want to add that the situation of having multiple solutions with equal quality might of course pose a problem, as it creates neutral areas in the search space. However, I think you didn't really mean that. The primary problem is genetic drift, i.e. the loss of (important) genetic information.
As a sidenote, you (the OP) said:
We know that Genetic Algorithms (or evolutionary computation) work with an encoding of the points in our solution space Ω rather than these points directly.
This is not always true. An individual is coded as a genotype, which can have any shape, such as a string (genetic algorithms) or a vector of reals (evolution strategies). Each genotype is transformed into a phenotype when the individual is assessed, i.e. when its fitness is calculated. In some cases, the phenotype is identical to the genotype: this is called direct coding. Otherwise, the coding is called indirect. (You may find more definitions here (section 2.2.1).)
Example of direct encoding:
http://en.wikipedia.org/wiki/Neuroevolution#Direct_and_Indirect_Encoding_of_Networks
Example of indirect encoding:
Suppose you want to optimize the size of a rectangular parallelepiped defined by its length, height and width. To simplify the example, assume that these three quantities are integers between 0 and 15. We can then describe each of them using a 4-bit binary number. An example of a potential solution may be the genotype 0001 0111 1010. The corresponding phenotype is a parallelepiped of length 1, height 7 and width 10.
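A small illustration of that decoding (in Julia; the helper name is mine):

# Decode a binary genotype of three 4-bit genes into the phenotype
# (length, height, width), each an integer between 0 and 15.
function decode(genotype::String)
    bits = replace(genotype, " " => "")
    genes = [bits[i:i+3] for i in 1:4:length(bits)]
    return [parse(Int, g, base = 2) for g in genes]
end

decode("0001 0111 1010")   # -> [1, 7, 10]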
Now back to the original question on diversity: in addition to what DonAndre said, you could read chapter 9, "Multi-Modal Problems and Spatial Distribution", of the excellent book Introduction to Evolutionary Computing by A. E. Eiben and J. E. Smith, as well as a research paper on that matter such as Encouraging Behavioral Diversity in Evolutionary Robotics: an Empirical Study. In a word, diversity is not a drawback of GA, it is "just" an issue.