Setup
macOS 10.12.6
Python 2.7
GalSim 1.4.4
Goal
I want to read in two SEDs (bulge and disk) and use them to make WFIRST PSFs.
So far I am able to import SEDs with galsim.SED()
and produce a PSF using wfirst.getPSF().
Problem
When I call wfirst.getPSF(), I cannot take my desired SED into account.
I even tried using galsim.Bandpass(), without success.
More details:
Based on the recipe provided in Example #13 (demo13.py), one may produce a PSF using wfirst.getPSF() and then convolve it with the SED.
I followed this routine:
PSFs = wfirst.getPSF(SCAs=use_SCA, approximate_struts=True, n_waves=10, logger=logger)
PSF = PSFs[use_SCA]  # getPSF returns a dict keyed by SCA number
point = galsim.Gaussian(sigma=1.e-8, flux=1.)
star_sed = galsim.SED(lambda x: 1, 'nm', 'flambda').withFlux(1., filter_)
star = galsim.Convolve(point*star_sed, PSF)
I was wondering if there is an option in which we can take SED into account when we want to make the PSF.
-Thank you
The key point of confusion is that the PSF does not have an SED; only astronomical objects such as stars and galaxies have SEDs. The process that you've pointed to in demo13.py is the correct way to include an SED: you attach it to the astronomical object in question (in this case a star, but you could also assign an SED to a galaxy, or assign different SEDs to separate components of a galaxy).
So if you had achromatic galsim.GSObjects called bulge and disk for two galaxy components, and separate SEDs for each one (bulge_sed and disk_sed), a chromatic WFIRST PSF called psf, and a galsim.Bandpass filter, then you simply do:
galaxy = bulge*bulge_sed + disk*disk_sed
obj = galsim.Convolve(galaxy, psf)
image = obj.drawImage(bandpass=filter, scale=wfirst.pixel_scale)
The PSF for the bulge and disk will differ because the PSF is chromatic and you've given the bulge and disk different SEDs, so this should cover the use case that you've described. See demo12.py for more examples of how to use the chromatic functionality that you are asking about (especially example C in that demo is relevant to your question).
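For concreteness, here is a minimal end-to-end sketch along the lines of demo12.py and demo13.py. The SED files, component parameters, SCA, and filter choice below are placeholder assumptions for illustration, not values from the question; substitute your own.

import os
import galsim
import galsim.wfirst as wfirst

# Placeholder SEDs from GalSim's shared data; use your own bulge/disk SEDs here.
sed_dir = os.path.join(galsim.meta_data.share_dir, 'SEDs')
bulge_sed = galsim.SED(os.path.join(sed_dir, 'CWW_E_ext.sed'),
                       wave_type='Ang', flux_type='flambda')
disk_sed = galsim.SED(os.path.join(sed_dir, 'CWW_Im_ext.sed'),
                      wave_type='Ang', flux_type='flambda')

# Achromatic light profiles for the two components (parameters are arbitrary).
bulge = galsim.Sersic(n=4, half_light_radius=0.3, flux=0.4)
disk = galsim.Exponential(half_light_radius=0.6, flux=0.6)

# Chromatic WFIRST PSF for one SCA (getPSF returns a dict keyed by SCA number).
use_SCA = 7
psf = wfirst.getPSF(SCAs=use_SCA, approximate_struts=True, n_waves=10)[use_SCA]

# WFIRST bandpasses; pick one filter to draw in.
filters = wfirst.getBandpasses(AB_zeropoint=True)
bandpass = filters['H158']

# Attach the SEDs to the components, convolve with the PSF, and draw.
galaxy = bulge * bulge_sed + disk * disk_sed
obj = galsim.Convolve(galaxy, psf)
image = obj.drawImage(bandpass=bandpass, scale=wfirst.pixel_scale)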
Related
Following most estimation commands in Stata (e.g. reg, logit, probit, etc.) one may access the estimates using the _b[ParameterName] syntax (or the synonymous _coef[ParameterName]). For example:
regress y x
followed by
di _b[x]
will display the estimate of the coefficient of x. di _b[_cons] will display the coefficient of the estimated intercept (assuming the regress command was successful), etc.
But if I use the nonlinear least squares command nl I (seemingly) have to do something slightly different. Now (leaving aside that for this example model there is absolutely no need to use an NLLS regression):
nl (y = {_cons} + {x}*x)
followed by (notice the forward slash)
di _b[/x]
will display the estimate of the coefficient of x.
Why does accessing parameter estimates following nl require a different syntax? Are there subtleties to be aware of?
"leaving aside that for this example model there is absolutely no need to use a NLLS regression": I think that's what you can't do here....
The question is about why the syntax is as it is. That's a matter of logic and a matter of history. Why a particular syntax was chosen is ultimately a question for the programmers at StataCorp who chose it. Here is one limited take on your question.
The main syntax for regression-type models grows out of a syntax designed for linear regression models in which by default the parameters include an intercept, as you know.
The original syntax for nonlinear regression models (in the sense of being estimated by nonlinear least-squares) matches a need to estimate a bundle of parameters specified by the user, which need not include an intercept at all.
Otherwise put, there is no question of an intercept being a natural default; no parameterisation is a natural default and each model estimated by nl is sui generis.
A helpful feature is that users can choose the names they find natural for the parameters, within the constraints of what counts as a legal name in Stata, say alpha, beta, gamma, a, b, c, etc. If you choose _cons for the intercept in nl that is a legal name but otherwise not special and just your choice; nl won't take it as a signal that it should flip into using regress conventions.
The syntax you cite is part of what was made possible by a major redesign of nl but it is consistent with the original philosophy.
That the syntax is different because it needs to be may not be the answer you seek, but I guess you'll get a fuller answer only from StataCorp; developers do hang out on Statalist, but they don't make themselves visible here.
I am currently doing a project on speaker verification using Hidden Markov Models. I chose MFCC for my feature extraction. I also intend to apply VQ to it. I have implemented HMM and tested it on Eisner's data spreadsheet found here: http://www.cs.jhu.edu/~jason/papers/ and got correct results.
Using voice signals, I seem to have missed something, since I was not getting correct acceptance (I did the probability estimation using the forward algorithm, with no scaling applied). I was wondering what I could have done wrong. I used scikits.talkbox's MFCC function for feature extraction and SciPy's cluster module for vector quantization. Here is what I have written:
import random
from scikits.talkbox.features import mfcc
from scikits.audiolab import wavread
from scipy.cluster.vq import vq, kmeans, whiten
# Read the audio and extract MFCC features (mfcc returns (ceps, mspec, spec)).
(data, fs) = wavread(file_name)[:2]
mfcc_features = mfcc(data, fs=fs)[0]
# Vector Quantization
# collected_feats is a list of spectral vectors taken together from 3 voice samples
random.seed(0)
collected_feats = whiten(collected_feats)
codebook = kmeans(collected_feats, no_clusters)[0]
feature = vq(mfcc_features, codebook)[0]  # codeword indices
# feature is then used as the observation sequence for the hidden Markov model
I assumed that the default parameters of scikits' mfcc function are already suitable for speaker verification. The audio files have sampling rates of 8000 and 22050 Hz. Is there something I am lacking here? I chose 64 clusters for VQ. Each sample is an isolated word, at least 1 second in duration. I haven't found a Python function yet to remove the silences in the voice samples, so I use Audacity to manually truncate the silent parts. Any help would be appreciated. Thanks!
Well, I am not sure about the HMM approach, but I would recommend using GMMs. ALIZE is a great library for doing that. For silence removal, use the LIUM library. The process is called speaker diarization: the program detects where the speaker is speaking and gives the time stamps.
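If you want to stay in Python rather than pull in LIUM, a crude energy-based trimmer is often enough for isolated words. This is only a sketch, assuming a mono signal; the frame length and threshold below are arbitrary choices, not values from the question:

import numpy as np

def trim_silence(signal, fs, frame_ms=25, threshold_db=-35.0):
    """Drop frames whose short-time energy is more than threshold_db below
    the loudest frame (a very crude voice-activity detector)."""
    frame_len = int(fs * frame_ms / 1000.0)
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[:n_frames * frame_len], (n_frames, frame_len))
    energy_db = 10.0 * np.log10(np.sum(frames ** 2, axis=1) + 1e-12)
    keep = energy_db > (energy_db.max() + threshold_db)
    return frames[keep].ravel()

# e.g. with data, fs from wavread() as above:
# trimmed = trim_silence(data, fs)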
Has anyone managed to run an ordinary least squares regression in Vowpal Wabbit? I'm trying to confirm that it will return the same answer as the exact solution, i.e. when choosing a to minimize ||y - Xa||_2^2 + ||Ra||_2^2 (where R is the regularization) I want to get the analytic answer
a = (X^T X + R^T R)^(-1) X^T y. Doing this type of regression takes about 5 lines of numpy in Python.
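For reference, a rough numpy version of that closed-form computation (the toy data and the identity-matrix regularizer below are assumptions for illustration):

import numpy as np

# Toy data: X is (n_samples, n_features), y is (n_samples,).
rng = np.random.RandomState(0)
X = rng.randn(100, 3)
y = X.dot(np.array([3.4, -1.2, 4.0])) + 0.1 * rng.randn(100)

lam = 0.0                               # no regularization: plain OLS
R = np.sqrt(lam) * np.eye(X.shape[1])   # so that R^T R = lam * I

# a = (X^T X + R^T R)^(-1) X^T y, solved without forming an explicit inverse.
a = np.linalg.solve(X.T.dot(X) + R.T.dot(R), X.T.dot(y))
print(a)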
The documentation of VW suggests that it can do this (presumably with the "squared" loss function) but so far I've been unable to get it to come even close to matching the Python results. Because squared is the default loss function, I'm simply calling:
$ vw-varinfo input.txt
where input.txt has lines like
1.4 | 0:3.4 1:-1.2 2:4.0 .... etc
Do I need some other parameters in the VW call? I'm unable to grok the (rather minimal) documentation.
I think you should use this syntax (vowpal wabbit version 7.3.1):
vw -d input.txt -f linear_model -c --passes 50 --holdout_off --loss_function squared --invert_hash model_readable.txt
This syntax will instruct VW to read your input.txt file, write a model file and a cache to disk (the cache is necessary for multi-pass convergence), and fit a regression using the squared loss function. It will also write the model coefficients in a readable form into a file called model_readable.txt.
The --holdout_off option is a recent addition that suppresses the automatic out-of-sample loss computation (if you are using an earlier version, remove it).
Basically, a regression based on stochastic gradient descent will give you a vector of coefficients close to the exact solution only when no regularization is applied and the number of passes is high (I would suggest 50 or more; randomly shuffling the rows of the input file also helps the algorithm converge).
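To compare against the exact solution you then need the fitted weights back out of model_readable.txt. The exact header layout of the --invert_hash output varies between VW versions, so the sketch below simply keeps lines that look like name:hash:weight and ignores everything else:

def read_vw_model(path):
    """Collect {feature_name: weight} from a VW --invert_hash readable model."""
    coefs = {}
    with open(path) as fh:
        for line in fh:
            parts = line.strip().split(':')
            if len(parts) == 3:
                name, _, weight = parts
                try:
                    coefs[name] = float(weight)
                except ValueError:
                    pass  # skip anything that is not a numeric weight
    return coefs

print(read_vw_model('model_readable.txt'))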
I'm trying to solve the following problem:
I'm analyzing an image and I obtain from this analysis a set of segments
I want to know the intersection of these lines (best fit)
For this I'm using OpenCV's function cvSolve. For reasonably good input everything works fine.
The problem I have is that even a single bad segment in the input makes the result different from the expected one.
Details:
The upper left image shows the "lonely" purple lines influencing the result (all lines are used as input).
The upper right image shows how a single purple line (one removed) can influence the result.
The lower left image shows what we want: the intersection of the lines as expected (both purple lines eliminated).
The lower right image shows how the other purple line (with the first one removed) can influence the result.
As you can see, just two lines can make the result completely different from the one expected. Any ideas on how to avoid this are appreciated.
Thanks,
Iulian
The algorithm you are using finds, as described in the link, the least square error solution to the problem. This means that if there are more intersection points, the result will be an average (for a reasonable definition of average) of the real solutions.
I would try an iterative solution: if the error of the first solution is too large, remove from the set of segments the one farthest from the solution, and iterate until the error is acceptably small. This should remove one of the many intersection points and converge on the one with the most lines nearby.
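A rough sketch of that loop in Python, assuming each segment has already been converted to the normal form A x = b used in the SVD answer below (the max_err tolerance is an arbitrary choice):

import numpy as np

def robust_intersection(A, b, max_err=1.0):
    """Least-squares intersection that iteratively drops the segment whose
    line lies farthest from the current estimate, until the worst residual
    is below max_err or only two segments remain."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    while True:
        point = np.linalg.lstsq(A, b, rcond=None)[0]
        # Distance from the current estimate to each line a*s + c*t = b.
        dist = np.abs(A.dot(point) - b) / np.linalg.norm(A, axis=1)
        worst = dist.argmax()
        if dist[worst] <= max_err or len(b) <= 2:
            return point
        A, b = np.delete(A, worst, axis=0), np.delete(b, worst)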
A general answer to this kind of problem is the RANSAC algorithm (see the question dealing with this); however, it has a few disadvantages, for example you need to estimate things like "the expected number of outliers" beforehand. Another problem I see with your sample is that removing the two green lines also results in a pretty good fit, so that might be a more general issue.
You can solve this using SVD. Suppose line1 = (x1,y1)-(x2,y2) and line2 = (x2,y2)-(x3,y3).
Let A x = b, where

A = [ -(y2-y1)   (x2-x1) ;
      -(y3-y2)   (x3-x2) ;
      ...                 ]   --> (n x 2)

x = [ s ; t ]                 --> (2 x 1), the intersection point

b = [ -(y2-y1)*x1 + (x2-x1)*y1 ;
      -(y3-y2)*x2 + (x3-x2)*y2 ;
      ...                       ]   --> (n x 1)
Example (MATLAB code):
% Three segments, each stored as [x1, y1; x2, y2].
line1 = [0,10; 5,10]   % horizontal line y = 10
line2 = [10,0; 10,5]   % vertical line x = 10
line3 = [0,0; 5,5]     % diagonal line y = x
% Build A and b row by row from the normal form of each line.
A = [-(line1(2,2)-line1(1,2)), (line1(2,1)-line1(1,1));
     -(line2(2,2)-line2(1,2)), (line2(2,1)-line2(1,1));
     -(line3(2,2)-line3(1,2)), (line3(2,1)-line3(1,1))];
b = [(line1(1,1)*A(1,1)) + (line1(1,2)*A(1,2));
     (line2(1,1)*A(2,1)) + (line2(1,2)*A(2,2));
     (line3(1,1)*A(3,1)) + (line3(1,2)*A(3,2))];
% Least-squares solution via the SVD.
[U, D, V] = svd(A)
bprime = U'*b
y = [bprime(1)/D(1,1); bprime(2)/D(2,2)]
x = V*y                % -> [10; 10], the common intersection
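The same construction in Python with NumPy, for anyone not using MATLAB (just a sketch of the identical least-squares idea, using the same three example lines):

import numpy as np

# Each segment as ((x1, y1), (x2, y2)); the same three lines as above.
segments = [((0, 10), (5, 10)), ((10, 0), (10, 5)), ((0, 0), (5, 5))]

A, b = [], []
for (x1, y1), (x2, y2) in segments:
    # Normal form of the line through (x1, y1) and (x2, y2):
    # -(y2 - y1)*s + (x2 - x1)*t = -(y2 - y1)*x1 + (x2 - x1)*y1
    A.append([-(y2 - y1), x2 - x1])
    b.append(-(y2 - y1) * x1 + (x2 - x1) * y1)

point = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)[0]
print(point)   # -> approximately [10. 10.]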
This picture from Wikipedia has a nice example of the sort of functions I'd ideally like to generate:
Right now I'm using the Irwin-Hall distribution, which is more or less a polynomial approximation of the Gaussian distribution: basically, you call a uniform random number generator n times and take the average. The more iterations, the closer the result is to a Gaussian distribution.
It's pretty nice; however, I'd like to be able to vary the mean. For example, let's say I wanted a number in the range 0 to 10, but centred around 7: the mean (if I repeated this function many times) would turn out to be 7, but the actual range would still be 0-10.
Is there one I should look up, or should I work on doing some fancy maths with standard Gaussian distributions?
I see a contradiction in your question. On the one hand you want a normal distribution, which is symmetrical by its nature; on the other hand you want the range asymmetrically disposed about the mean value.
I suspect you should look at other distributions whose density functions are bell-shaped but asymmetrical, such as the log-normal or beta distributions.
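As a concrete illustration of the beta suggestion (the shape parameters below are just one possible choice): a scaled Beta(a, b) draw has mean low + (high - low) * a / (a + b), so Beta(7, 3) stretched over [0, 10] averages 7 while always staying inside the bounds.

import random

def bounded_skewed(low=0.0, high=10.0, a=7.0, b=3.0):
    """Value in [low, high] with mean low + (high - low) * a / (a + b).
    Increase a and b together (keeping their ratio) to tighten the spread."""
    return low + (high - low) * random.betavariate(a, b)

samples = [bounded_skewed() for _ in range(100000)]
print(sum(samples) / len(samples))   # close to 7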
Look into generating normal random variates. You can generate standard normal random variates X = N(0,1) and transform them into ANY normal random variate Y = N(m,s) via Y = m + s*X.
Sounds like the Truncated Normal distribution is just what the doctor ordered. It is not "computationally simple" per se, but easy to implement if you have an existing implementation of a normal distribution.
You can just generate the distribution with the mean you want, the standard deviation you want, and the two endpoints wherever you want. You'll have to do some work beforehand to compute the mean and standard deviation of the underlying (non-truncated) normal distribution so that the truncated normal has the mean you want, but you can use the formulae in that article. Also note that you can adjust the variance as well using this method :)
I have Java code (based on the Commons Math framework) for both an accurate (slower) and quick (less accurate) implementation of this distribution, with PDF, CDF, and sampling.
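If all you need is sampling and you already have a normal generator handy, plain rejection also works; here is a minimal Python sketch with arbitrary example parameters (note, as mentioned above, that the mean of the underlying normal is not exactly the mean of the truncated result):

import random

def truncated_normal(mu, sigma, low, high):
    """Rejection-sample Normal(mu, sigma) restricted to [low, high].
    Fast when [low, high] keeps most of the mass; slow otherwise."""
    while True:
        x = random.gauss(mu, sigma)
        if low <= x <= high:
            return x

# e.g. values in [0, 10] concentrated around 7
samples = [truncated_normal(7.0, 2.0, 0.0, 10.0) for _ in range(10000)]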