healpy synfast: how to define the random seed

I'm using healpy.synfast to create maps, but it seems that healpy does not have the "iseed" parameter (as in the HEALPix facility here: http://healpix.jpl.nasa.gov/html/facilitiesnode14.htm), which lets me define the random seed used to generate the alms from the power spectrum.
Could anyone tell me how to achieve the "iseed" functionality in healpy? Thanks!

healpy internally uses np.random.standard_normal to generate the real and imaginary components of the alms, see sphtfunc.py.
Therefore you can set the seed with the numpy.random.seed function:
numpy.random.seed(1234)
before calling synfast.
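For example, a minimal sketch (the input power spectrum below is made up for illustration; any C_ell array works):
import numpy as np
import healpy as hp

nside = 64
ell = np.arange(3 * nside)
cl = np.zeros(len(ell))
cl[2:] = 1.0 / (ell[2:] * (ell[2:] + 1))  # illustrative power spectrum, not from the original post

np.random.seed(1234)  # fix the seed before each call
map1 = hp.synfast(cl, nside)
np.random.seed(1234)  # same seed -> identical map
map2 = hp.synfast(cl, nside)
assert (map1 == map2).all()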

Related

How to inject a zero-noise compact binary coalescence signal

Is it possible to inject a signal by itself with no coloured Gaussian noise?
Question asked by Arunava Mukherjee via email
Yes. There are two easy ways to do this.
1) Use the existing helper functions
When generating an interferometer object, bilby provides several helper routines named bilby.gw.detector.get_interferometer_with.... In this case, you'll want to use the following function (I've truncated the docstring):
bilby.gw.detector.get_interferometer_with_fake_noise_and_injection(
    name, injection_parameters, injection_polarizations=None,
    waveform_generator=None, sampling_frequency=4096, duration=4,
    start_time=None, outdir='outdir', label=None, plot=True, save=True,
    zero_noise=False)
Docstring:
Helper function to obtain an Interferometer instance with appropriate
power spectral density and data, given a center_time.
Note: by default this generates an Interferometer with a power spectral
density based on advanced LIGO.
Parameters
----------
name: str
    Detector name, e.g., 'H1'.
...
zero_noise: bool
    If true, set noise to zero.
So you just pass the flag in and it will create an interferometer whose data contain just the injection signal (you'll then need to make one for each interferometer you want in the list of interferometers passed to the likelihood).
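For instance, a minimal sketch of this approach (the injection parameters and waveform settings here are illustrative placeholders, not values from the original question):
import bilby

# Illustrative binary-black-hole parameters (placeholders)
injection_parameters = dict(
    mass_1=36.0, mass_2=29.0, a_1=0.0, a_2=0.0, tilt_1=0.0, tilt_2=0.0,
    phi_12=0.0, phi_jl=0.0, luminosity_distance=400.0, theta_jn=0.4,
    psi=2.659, phase=1.3, geocent_time=1126259642.413, ra=1.375, dec=-1.2108)

waveform_generator = bilby.gw.WaveformGenerator(
    duration=4, sampling_frequency=4096,
    frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole)

# zero_noise=True means the data contain only the injected signal
interferometers = [
    bilby.gw.detector.get_interferometer_with_fake_noise_and_injection(
        name, injection_parameters=injection_parameters,
        waveform_generator=waveform_generator, zero_noise=True)
    for name in ["H1", "L1"]]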
2) Use the low level set strain data methods
Alternatively, you may wish to use the low-level methods themselves. As a general rule of thumb, you can always look at the source code of the generic helper functions to figure out how this should be done. Here, we create an H1 interferometer, set the strain data with zero noise, and inject a signal:
from bilby.gw.detector import PowerSpectralDensity, get_empty_interferometer

# Create an H1 interferometer with the advanced-LIGO design PSD
interferometer = get_empty_interferometer("H1")
interferometer.power_spectral_density = PowerSpectralDensity.from_aligo()
# The strain data is all zeros, so the injection is the only content
interferometer.set_strain_data_from_zero_noise(
    sampling_frequency=sampling_frequency, duration=duration,
    start_time=start_time)
injection_polarizations = interferometer.inject_signal(
    parameters=injection_parameters,
    waveform_generator=waveform_generator)
Information correct as of v.0.3.5

Random values in tensorflow

I want to generate random numbers within an activation function, such that a new random number is drawn every time the activation function is called. I tried random.uniform and tf.random_uniform, but only a single random value is generated when the graph is built, and it never changes afterwards. How can I make it update every time?
Funny fact:
When I create a variable using tf.Variable(random.uniform(1,2)), every time the function is called the value is slightly larger, for instance:
1.22069513798
1.22072458267
1.22075247765
1.22077202797
Edit:
The function is very simple:
def activation(tensor):
    alpha = tf.Variable(random.uniform(1, 2))
    return alpha * tensor, alpha
I will omit the rest of the neural-network code, but I simply call it as:
act,alpha = activation(dense_layer+bias)
I later get the value by simply:
[ts,c,alph]=sess.run([train_step,cost,alpha], feed_dict={xi: x_raw, yi: y_raw})
Thanks
Hard to tell without source code, but maybe you are initializing your variable with that random value and then reusing the same value? random.uniform(1, 2) is plain Python, so it is evaluated once when the graph is built; the slight increase you see on each call is most likely your train_step updating the trainable variable alpha, not a new random draw.
Another possibility: use a TensorFlow random op inside the graph instead of a Python-side value, so a fresh value is drawn on every sess.run.
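A minimal sketch of that idea (TensorFlow 1.x graph mode, to match the sess.run usage in the question; the placeholder shape is illustrative):
import tensorflow as tf

def activation(tensor):
    # tf.random_uniform is a graph op, so it yields a fresh draw on every run,
    # unlike tf.Variable(random.uniform(1, 2)), which freezes one Python value
    # at graph-construction time.
    alpha = tf.random_uniform([], minval=1.0, maxval=2.0)
    return alpha * tensor, alpha

x = tf.placeholder(tf.float32, shape=[None])
act, alpha = activation(x)

with tf.Session() as sess:
    for _ in range(3):
        print(sess.run(alpha, feed_dict={x: [1.0]}))  # a different value each run

If alpha is meant to be a learnable parameter, however, the tf.Variable version is the right one, and the slowly changing values are simply the training updates.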

ROC on multiple test sets in h2o (python)

I had a use-case that I thought was really simple but couldn't find a way to do it with h2o. I thought you might know.
I want to train my model once, and then evaluate its ROC on a few different test sets (e.g. a validation set and a test set, though in reality I have more than 2) without having to retrain the model. The way I know to do it now requires retraining the model each time:
train, valid, test = fr.split_frame([0.2, 0.25], seed=1234)
rf_v1 = H2ORandomForestEstimator( ... )
rf_v1.train(features, var_y, training_frame=train, validation_frame=valid)
roc = rf_v1.roc(valid=1)
rf_v1.train(features, var_y, training_frame=train, validation_frame=test) # training again with the same training set - can I avoid this?
roc2 = rf_v1.roc(valid=1)
I can also use model_performance(), which gives me some metrics on an arbitrary test set without retraining, but not the ROC. Is there a way to get the ROC out of the H2OModelMetrics object?
Thanks!
You can use H2O Flow to inspect the model performance. Simply go to http://localhost:54321/flow/index.html (if you changed the default port, change it in the link), type getModel "rf_v1" in a cell, and it will show you all the measurements of the model in multiple cells of the flow. It's quite handy.
If you are using Python, you can compute the performance on an arbitrary frame like this:
rf_perf1 = rf_v1.model_performance(test)
and then print the AUC like this:
print(rf_perf1.auc())
Yes, indirectly. Get the TPRs and FPRs from the H2OModelMetrics object:
out = rf_v1.model_performance(test)
fprs = out.fprs
tprs = out.tprs
roc = zip(fprs, tprs)
(By the way, my H2ORandomForestEstimator object does not seem to have an roc() method at all, so I'm not 100% sure that this output is in the exact same format. I'm using h2o version 3.10.4.7.)
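If you then want to see the curve itself, a quick sketch (assuming matplotlib is available; fprs and tprs are the lists extracted above):
import matplotlib.pyplot as plt

plt.plot(fprs, tprs, label="test set")          # ROC curve from the metrics object
plt.plot([0, 1], [0, 1], "--", label="chance")  # diagonal reference line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()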

Spark RDD into Matrix

I have an RDD like:
(A,AA,1)
(A,BB,0)
(A,CC,0)
(B,AA,2)
(B,BB,1)
(B,CC,4)
and I want to convert it into the following RDD:
([1,0,0],[2,1,4])
The order is important for me, since the main purpose is using RowMatrix to convert the second RDD to a matrix.
You need to be careful with the wording: when you ask for a matrix, do you mean something like spark.mllib's Matrix? If so, you will need to follow very specific instructions to create one. However, it seems to me that your problem can be solved in a much easier way, just using zipWithIndex with groupBy:
//Here is how I see it
import org.apache.spark.mllib.linalg.Vectors

// zipWithIndex records each element's original position, so the order survives the groupBy
val test = sc.parallelize(Array(("A","AA",1),("A","BB",0),("A","CC",0),("B","AA",2),("B","BB",1),("B","CC",4))).zipWithIndex
// group by the first field ("A"/"B"), sort each group by original position, keep the values
val grouptest = test.groupBy(_._1._1).map(x => Vectors.dense(x._2.map(y => (y._2, y._1._3)).toArray.sortBy(_._1).map(_._2.toDouble)))
In your example, you seem to want the result as a vector, so I used Spark's Vector (which, by the way, only allows Doubles). The resulting RDD of vectors can then be passed straight to RowMatrix via new RowMatrix(grouptest).
Result looks like:
[1.0,0.0,0.0]
[2.0,1.0,4.0]

Generate random number in vdm++

Does anyone know how to generate a random number in vdm++? The math library doesn't work for me.
You should be able to use the random generator in VDM (both in VDMTools and Overture).
In Overture the argument must be larger than 0, and the seed must be set, which it is by default. Remember to include the standard MATH library by selecting the project in the explorer, choosing New -> Add VDM Library, and selecting MATH.
It can be called like this: MATH.rand(100), which will return a number between 0 and 100.
The seed can be changed through MATH.srand(5), which returns the seed that was set.
