I have some folders containing many jpg pictures (the number depends on the folder).
I would like, for instance, to combine every 4 pictures into one image, together with the title of each picture (see the picture below).
(In case the last group does not contain exactly 4 images, it should just use whatever is left over, i.e. 3, 2 or 1 pictures.)
Ideally I could change that number to something else, like 5, 6 or 10 (the number I choose would depend on the context), and I could also choose the number of columns (I used 2 columns in my example below).
How can I do this with a Linux command or any free/open-source Linux software?
As I did not find what I wanted, I wrote my own Python script to solve this (it is probably not the most perfect script of the century, but it works).
"""
Prints a collage according to desired number of column and rows with title of file
Instruction
1. Put all jpg picture in same folder [tested sucessfully on 12mb per pict]
2. select desired columns in NO_COL
3. select desired rowsin in NO_ROW
4. run the script which will output the collage with <cur_date>_export.png files
"""
# import libraries
import time
import os

import imageio as iio
from matplotlib import pyplot as plt


def render_collage(pict_file_name_list):
    """Create one collage from the given list of picture file names."""
    fig = plt.figure(figsize=(40, 28))  # change if needed
    cnt = 1
    for cur_img_name in pict_file_name_list:
        img_var = iio.imread(cur_img_name)
        fig.add_subplot(NO_ROW, NO_COL, cnt)  # add_subplot expects (nrows, ncols, index)
        plt.imshow(img_var)
        plt.axis('off')
        plt.title(cur_img_name, fontsize=30)  # change if needed
        cnt = cnt + 1
    cur_date = time.strftime("%Y-%m-%d--%H-%M-%S")
    fig.savefig(cur_date + '_export.png')
    plt.close(fig)  # free the figure's memory between collages


NO_COL = 3
NO_ROW = 3
NBR_IMG_COLLAGE = NO_COL * NO_ROW

# keep only files ending in .jpg, in a deterministic order
img_list_name = sorted(elem for elem in os.listdir() if elem.lower().endswith('.jpg'))

while len(img_list_name) >= 1:
    sub_list = img_list_name[:NBR_IMG_COLLAGE]
    render_collage(sub_list)
    del img_list_name[:NBR_IMG_COLLAGE]
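(As a usage note: for the 4-pictures-per-collage, 2-column layout shown in my example above, the two constants would simply be set to:)

NO_COL = 2  # 2 columns
NO_ROW = 2  # 2 rows, i.e. 2 * 2 = 4 pictures per collage; the last collage just holds whatever 1-3 pictures are left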
I am working on ARIMA modelling. The data has hourly granularity, taken from 1st May 2022 till 8th June 2022, and I am trying to forecast the next 30 days, i.e. 720 hours. I am running into trouble and getting confused by the points below; any pointers would be great.
I tried plotting the raw data and found no trend and no seasonality.
a) I checked with seasonal_decompose() for a few period values. With period=1 the result matches my understanding that there should be no seasonal component.
b) With period=12 (chosen at random) it shows some seasons - why? Even if I plot without specifying a period (for which the default value is 7), it still shows a season - why?
I plotted the forecast with seasonal=False, since the raw plot shows no seasons/trend, and got the plot below. What should I conclude from it?
Then I thought of capturing this seasonality by resampling to a daily series and plotting it, and got further confused.
a) With period=7 (the default for seasonal_decompose), I again see a seasonality of about 4 days, even though the raw plot does not show any seasons.
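For reference, this is roughly how I am calling the decomposition on the hourly series and on the resampled daily series (a sketch assuming statsmodels; df is the same dataframe as in the code further down). As far as I understand, the decomposition always produces a seasonal component for whatever period it is given, which may be why different period values appear to show different seasons:

from statsmodels.tsa.seasonal import seasonal_decompose

# Hourly data: a daily cycle corresponds to period=24
decomp_hourly = seasonal_decompose(df['bps'], model='additive', period=24)
decomp_hourly.plot()

# Resampled to daily means: a weekly cycle corresponds to period=7
daily = df['bps'].resample('D').mean()
decomp_daily = seasonal_decompose(daily, model='additive', period=7)
decomp_daily.plot()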
The forecast for this resampled (daily) data is below.
I am now quite lost as to what I should be looking at; the more I read, the more confused I get.
Below is the code that I am using.
import numpy as np
import pandas as pd
import pmdarima as pm
from matplotlib import pyplot as plt

df = pd.read_csv('~/Desktop/gru-scl/gru-scl-filtered.csv', index_col="time")
del df["Index"]
df.index = pd.to_datetime(df.index)

model = pm.auto_arima(df.bps, start_p=0, start_q=0,
                      test='adf',        # use the ADF test to find the optimal 'd'
                      max_p=3, max_q=3,  # maximum p and q
                      m=24,              # frequency of the series
                      d=None,            # let the model determine 'd'
                      seasonal=False,    # no seasonality
                      start_P=0,
                      D=0,
                      trace=True,
                      error_action='ignore',
                      suppress_warnings=True,
                      stepwise=True)

f_steps = 720
fc, confint = model.predict(n_periods=f_steps, return_conf_int=True)
fc_index = np.arange(len(df.bps), len(df.bps) + f_steps)

val = 0
for f in fc:
    val = val + f
mean = val / f_steps
print(mean)

# make series for plotting purposes
fc_series = pd.Series(fc, index=fc_index)
lower_series = pd.Series(confint[:, 0], index=fc_index)
upper_series = pd.Series(confint[:, 1], index=fc_index)

# Plot
plt.plot(df.bps.values, label="Actual values")  # plot against hour number to match the forecast index
plt.plot(fc_series, color='darkgreen', label="Predicted values")  # fc_series places the forecast after the actuals
plt.fill_between(fc_index,
                 lower_series,
                 upper_series,
                 color='k', alpha=.15)
plt.legend(loc='upper left', fontsize=8)
plt.title('Forecast vs Actuals')
plt.xlabel("Hours since 1st May 2022")
plt.ylabel("Bps")
plt.show()
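A quick way to sanity-check stationarity directly (which is what the test='adf' option above relies on) is to run the ADF test on the series itself; a small sketch assuming statsmodels:

from statsmodels.tsa.stattools import adfuller

# Null hypothesis: the series has a unit root (is non-stationary).
# A small p-value (e.g. < 0.05) suggests the series is already stationary, so d=0.
adf_stat, p_value, *_ = adfuller(df.bps.dropna())
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.4f}")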
I'm trying to learn more about pyhf and my understanding of what its goals are might be limited. I would love to fit my HEP data outside of ROOT, but I could be imposing expectations on pyhf which are not what the authors intended for its use.
I'd like to write myself a hello-world example, but I might just not know what I'm doing. My misunderstanding could also stem from gaps in my statistical knowledge.
With that preface, let me explain what I'm trying to explore.
I have some observed set of events for which I calculate some observable and make a binned histogram of that data. I hypothesize that there are two contributing physics processes, which I call signal and background. I generate some Monte Carlo samples for these processes, and the theorized total number of events is close to, but not exactly, what I observe.
I would like to:
Fit the data to this two-process hypothesis
Get from the fit the optimal values for the number of events for each process
Get the uncertainties on these fitted values
If appropriate, calculate an upper limit on the number of signal events.
My starter code is below, where all I'm doing is an ML fit but I'm not sure where to go. I know it's not set up to do what I want, but I'm getting lost in the examples I find on RTD. I'm sure it's me, this is not a criticism of the documentation.
import pyhf
import numpy as np
import matplotlib.pyplot as plt
nbins = 15
# Generate a background and signal MC sample
MC_signal_events = np.random.normal(5,1.0,200)
MC_background_events = 10*np.random.random(1000)
signal_data = np.histogram(MC_signal_events,bins=nbins)[0]
bkg_data = np.histogram(MC_background_events,bins=nbins)[0]
# Generate an observed dataset with a slightly different
# number of events
signal_events = np.random.normal(5,1.0,180)
background_events = 10*np.random.random(1050)
observed_events = np.array(signal_events.tolist() + background_events.tolist())
observed_sample = np.histogram(observed_events,bins=nbins)[0]
# Plot these samples, if you like
plt.figure(figsize=(12,4))
plt.subplot(1,3,1)
plt.hist(observed_events,bins=nbins,label='Observations')
plt.legend()
plt.subplot(1,3,2)
plt.hist(MC_signal_events,bins=nbins,label='MC signal')
plt.legend()
plt.subplot(1,3,3)
plt.hist(MC_background_events,bins=nbins,label='MC background')
plt.legend()
# Use a very naive estimate of the background
# uncertainties
bkg_uncerts = np.sqrt(bkg_data)
print("Defining the PDF.......")
pdf = pyhf.simplemodels.hepdata_like(signal_data=signal_data.tolist(), \
bkg_data=bkg_data.tolist(), \
bkg_uncerts=bkg_uncerts.tolist())
print("Fit.......")
data = pyhf.tensorlib.astensor(observed_sample.tolist() + pdf.config.auxdata)
bestfit_pars, twice_nll = pyhf.infer.mle.fit(data, pdf, return_fitted_val=True)
print(bestfit_pars)
print(twice_nll)
plt.show()
Note: this answer is based on pyhf v0.5.2.
Alright, so it looks like you've managed to figure out most of the big pieces for sure. However, there are two different ways to do this depending on how you prefer to set things up. In both cases, I assume you want an unconstrained fit and you want to...
fit your signal+background model to observed data
fit your background model to observed data
First, let's discuss uncertainties briefly. At the moment, we default to numpy for the tensor backend and scipy for the optimizer. See documentation:
numpy backend
scipy optimizer
However, one unfortunate drawback right now with the scipy optimizer is that it cannot return the uncertainties. What you need to do anywhere in your code before the fit (although we generally recommend as early as possible) is to use the minuit optimizer instead:
pyhf.set_backend('numpy', 'minuit')
This will get you the nice features of being able to get the correlation matrix, the uncertainties on the fitted parameters, and the hessian -- amongst other things. We're working to make this consistent for scipy as well, but this is not ready right now.
All optimizations go through our optimizer API which you can currently view through the mixin here in our documentation. Specifically, the signature is
minimize(
    objective,
    data,
    pdf,
    init_pars,
    par_bounds,
    fixed_vals=None,
    return_fitted_val=False,
    return_result_obj=False,
    do_grad=None,
    do_stitch=False,
    **kwargs)
There are a lot of options here. Let's just focus on the fact that one of the keyword arguments we can pass through is return_uncertainties which will change the bestfit parameters by adding a column for the fitted parameter uncertainty which you want.
1. Signal+Background
In this case, we want to just use the default model
result, twice_nll = pyhf.infer.mle.fit(
    data,
    pdf,
    return_uncertainties=True,
    return_fitted_val=True
)
bestfit_pars, errors = result.T
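If it helps to see which entry is the signal strength, the POI's position is available from the model config (a small convenience sketch, not something you strictly need):

# the parameter of interest (signal strength) sits at poi_index
poi_index = pdf.config.poi_index
print(f"mu = {bestfit_pars[poi_index]} +/- {errors[poi_index]}")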
2. Background-Only
In this case, we need to turn off the signal. The way we do this is by setting the parameter of interest (POI) fixed to 0.0. Then we can get the fitted parameters for the background-only model in a similar way, but using fixed_poi_fit instead of an unconstrained fit:
result, twice_nll = pyhf.infer.mle.fixed_poi_fit(
    0.0,
    data,
    pdf,
    return_uncertainties=True,
    return_fitted_val=True
)
bestfit_pars, errors = result.T
Note that this is quite simply a quick way of doing the following fit by hand with the plain mle.fit API:
bkg_params = pdf.config.suggested_init()
fixed_params = pdf.config.suggested_fixed()
bkg_params[pdf.config.poi_index] = 0.0
fixed_params[pdf.config.poi_index] = True
result, twice_nll = pyhf.infer.mle.fit(
    data,
    pdf,
    init_pars=bkg_params,
    fixed_params=fixed_params,
    return_uncertainties=True,
    return_fitted_val=True
)
bestfit_pars, errors = result.T
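Since both fits return twice the negative log-likelihood at their minimum, keeping both values around gives a quick, rough feel for how much adding the signal improves the fit (for an actual significance or limit you would go through pyhf.infer.hypotest). A sketch, with the results renamed so the two fits above don't overwrite each other:

# hypothetical renaming of the two fits shown above
result_sb, twice_nll_sb = pyhf.infer.mle.fit(
    data, pdf, return_uncertainties=True, return_fitted_val=True
)
result_bkg, twice_nll_bkg = pyhf.infer.mle.fixed_poi_fit(
    0.0, data, pdf, return_uncertainties=True, return_fitted_val=True
)
# a larger difference means the signal+background model describes the data noticeably better
print("Delta(-2 ln L) =", twice_nll_bkg - twice_nll_sb)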
Hopefully that clears things up!
Giordon's solution should answer all of your questions, but I thought I'd also write out the code to address basically everything we can.
I also take the liberty of changing some of your values a bit, so that the signal isn't so strong that the observed CLs value ends up far off to the right of the Brazil band (the results wouldn't be wrong, obviously, but at that point it probably makes more sense to talk about using the discovery test statistic rather than setting limits. :))
Environment
For this example I'm going to setup a clean Python 3 virtual environment and then install the dependencies (here we're going to be using pyhf v0.5.2)
$ python3 -m venv "${HOME}/.venvs/question"
$ . "${HOME}/.venvs/question/bin/activate"
(question) $ cat requirements.txt
pyhf[minuit,contrib]~=0.5.2
black
(question) $ python -m pip install -r requirements.txt
Code
While we can't easily get the best fit value for both the number of signal events and the number of background events, we definitely can do inference to get the best fit value for the signal strength.
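That said, since the signal strength in this simple model is just a multiplicative factor on the nominal signal template, you can translate the fitted strength back into an approximate event count by hand; a rough sketch (the names refer to variables defined in the full script below):

# mu scales the nominal signal template, so approximately:
#   fitted number of signal events ~ mu_hat * (nominal signal yield)
n_signal_nominal = signal_sample.sum()
n_signal_fit = bestfit_pars[0] * n_signal_nominal
n_signal_uncert = par_uncerts[0] * n_signal_nominal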
The following chunk of code (which is long only because of the visualization) should address all of the points of your question.
# answer.py
import numpy as np
import pyhf
import matplotlib.pyplot as plt
import pyhf.contrib.viz.brazil

# Goals:
# - Fit the model to the observed data
# - Infer the best fit signal strength given the model
# - Get the uncertainties on the best fit signal strength
# - Calculate a 95% CL upper limit on the signal strength


def plot_hist(ax, bins, data, bottom=0, color=None, label=None):
    bin_width = bins[1] - bins[0]
    bin_leftedges = bins[:-1]
    bin_centers = [edge + bin_width / 2.0 for edge in bin_leftedges]
    ax.bar(
        bin_centers, data, bin_width, bottom=bottom, alpha=0.5, color=color, label=label
    )


def plot_data(ax, bins, data, label="Data"):
    bin_width = bins[1] - bins[0]
    bin_leftedges = bins[:-1]
    bin_centers = [edge + bin_width / 2.0 for edge in bin_leftedges]
    ax.scatter(bin_centers, data, color="black", label=label)


def invert_interval(test_mus, hypo_tests, test_size=0.05):
    # This will be taken care of in v0.5.3
    cls_obs = np.array([test[0] for test in hypo_tests]).flatten()
    cls_exp = [
        np.array([test[1][idx] for test in hypo_tests]).flatten() for idx in range(5)
    ]
    crossing_test_stats = {"exp": [], "obs": None}
    for cls_exp_sigma in cls_exp:
        crossing_test_stats["exp"].append(
            np.interp(
                test_size, list(reversed(cls_exp_sigma)), list(reversed(test_mus))
            )
        )
    crossing_test_stats["obs"] = np.interp(
        test_size, list(reversed(cls_obs)), list(reversed(test_mus))
    )
    return crossing_test_stats


def main():
    np.random.seed(0)
    pyhf.set_backend("numpy", "minuit")

    observable_range = [0.0, 10.0]
    bin_width = 0.5
    _bins = np.arange(observable_range[0], observable_range[1] + bin_width, bin_width)

    n_bkg = 2000
    n_signal = int(np.sqrt(n_bkg))

    # Generate simulation
    bkg_simulation = 10 * np.random.random(n_bkg)
    signal_simulation = np.random.normal(5, 1.0, n_signal)

    bkg_sample, _ = np.histogram(bkg_simulation, bins=_bins)
    signal_sample, _ = np.histogram(signal_simulation, bins=_bins)

    # Generate observations
    signal_events = np.random.normal(5, 1.0, int(n_signal * 0.8))
    bkg_events = 10 * np.random.random(int(n_bkg + np.sqrt(n_bkg)))

    observed_events = np.array(signal_events.tolist() + bkg_events.tolist())
    observed_sample, _ = np.histogram(observed_events, bins=_bins)

    # Visualize the simulation and observations
    fig, ax = plt.subplots()
    fig.set_size_inches(7, 5)

    plot_hist(ax, _bins, bkg_sample, label="Background")
    plot_hist(ax, _bins, signal_sample, bottom=bkg_sample, label="Signal")
    plot_data(ax, _bins, observed_sample)
    ax.legend(loc="best")
    ax.set_ylim(top=np.max(observed_sample) * 1.4)
    ax.set_xlabel("Observable")
    ax.set_ylabel("Count")
    fig.savefig("components.png")

    # Build the model
    bkg_uncerts = np.sqrt(bkg_sample)
    model = pyhf.simplemodels.hepdata_like(
        signal_data=signal_sample.tolist(),
        bkg_data=bkg_sample.tolist(),
        bkg_uncerts=bkg_uncerts.tolist(),
    )
    data = pyhf.tensorlib.astensor(observed_sample.tolist() + model.config.auxdata)

    # Perform inference
    fit_result = pyhf.infer.mle.fit(data, model, return_uncertainties=True)
    bestfit_pars, par_uncerts = fit_result.T
    print(
        f"best fit parameters:\
        \n * signal strength: {bestfit_pars[0]} +/- {par_uncerts[0]}\
        \n * nuisance parameters: {bestfit_pars[1:]}\
        \n * nuisance parameter uncertainties: {par_uncerts[1:]}"
    )

    # Perform hypothesis test scan
    _start = 0.0
    _stop = 5
    _step = 0.1
    poi_tests = np.arange(_start, _stop + _step, _step)

    print("\nPerforming hypothesis tests\n")
    hypo_tests = [
        pyhf.infer.hypotest(
            mu_test,
            data,
            model,
            return_expected_set=True,
            return_test_statistics=True,
            qtilde=True,
        )
        for mu_test in poi_tests
    ]

    # Upper limits on signal strength
    results = invert_interval(poi_tests, hypo_tests)

    print(f"Observed Limit on µ: {results['obs']:.2f}")
    print("-----")
    for idx, n_sigma in enumerate(np.arange(-2, 3)):
        print(
            "Expected {}Limit on µ: {:.3f}".format(
                " " if n_sigma == 0 else "({} σ) ".format(n_sigma),
                results["exp"][idx],
            )
        )

    # Visualize the "Brazil band"
    fig, ax = plt.subplots()
    fig.set_size_inches(7, 5)

    ax.set_title("Hypothesis Tests")
    ax.set_ylabel(r"$\mathrm{CL}_{s}$")
    ax.set_xlabel(r"$\mu$")

    pyhf.contrib.viz.brazil.plot_results(ax, poi_tests, hypo_tests)
    fig.savefig("brazil_band.png")


if __name__ == "__main__":
    main()
which when run gives
(question) $ python answer.py
best fit parameters:
* signal strength: 1.5884737977889158 +/- 0.7803435235862329
* nuisance parameters: [0.99020988 1.06040191 0.90488207 1.03531383 1.09093327 1.00942088
1.07789316 1.01125627 1.06202964 0.95780043 0.94990993 1.04893286
1.0560711 0.9758487 0.93692481 1.04683181 1.05785515 0.92381263
0.93812855 0.96751869]
* nuisance parameter uncertainties: [0.06966439 0.07632218 0.0611428 0.07230328 0.07872258 0.06899675
0.07472849 0.07403246 0.07613661 0.08606657 0.08002775 0.08655314
0.07564512 0.07308117 0.06743479 0.07383134 0.07460864 0.06632003
0.06683251 0.06270965]
Performing hypothesis tests
/home/stackoverflow/.venvs/question/lib/python3.7/site-packages/pyhf/infer/calculators.py:229: RuntimeWarning: invalid value encountered in double_scalars
teststat = (qmu - qmu_A) / (2 * self.sqrtqmuA_v)
Observed Limit on µ: 2.89
-----
Expected (-2 σ) Limit on µ: 0.829
Expected (-1 σ) Limit on µ: 1.110
Expected Limit on µ: 1.542
Expected (1 σ) Limit on µ: 2.147
Expected (2 σ) Limit on µ: 2.882
Let us know if you have any further questions!
Basically, I have solved the heat equation for (x,y,t) and I want to show the variation of the temperature function with time. The program was written in Fortran 90 and the solution data is stored in the file diffeqn3D_file.txt.
This is the program:
Program diffeqn3D
Implicit none
Integer:: b,c,d,l,i,j,k,x,y,t
Real:: a,r,s,h,t1,k1,u,v,tt,p
Real,Dimension(0:500,0:500,0:500):: f1   !f1 = f1(x,y,t)
!t1 = time step, h = position step along x,
!k1 = position step along y, a = conductivity
open(7, file='diffeqn3D_file.txt', status='unknown')
a=0.024
t1=0.1
h=0.1
k1=0.1
r=(h**2)/(k1**2)
s=(h**2)/(a*t1)
l=10
tt=80.5
b=100
c=100
d=100
!The temperature is TT at x=0 and 0 at x=l.
!The rod is heated along the line x=0.
!Initial conditions to be changed as per problem..
Do x=0,b
Do y=0,c
Do t=0,d
If(x==0) Then
f1(x,y,t)=tt
Else If((x.ne.0).and.t==0) Then
f1(x,y,t)=0
End If
End Do
End Do
End Do
print *,f1(9,7,5)
print *,r
print *,a,h,t1,h**2,a*t1,(h**2)/(a*t1)
print *,f1(0,1,1)
print *,f1(3,1,1)
!num_soln_of_eqnwrite(7,*)
Do t=1,d
Do y=1,c-1
Do x=1,b-1
p=f1(x-1,y,t-1)+f1(x+1,y,t-1)+r*f1(x,y-1,t-1)+r*f1(x,y+1,t-1)-(2+2*r-s)*f1(x,y,t-1)
f1(x,y,t)=p/s
!f1(x,t)=0.5*(f1(x-1,t-1)+f1(x+1,t-1))
!print *,f1(x,t),b
End Do
End Do
End Do
Do i=0,d
Do k=0,b
Do j=0,c
u=k*h
v=j*k1
write(7,*) u,v,f1(k,j,i)
End Do
End Do
write(7,*) " "
write(7,*) " "
End Do
close(7)
End Program diffeqn3D
After compiling and running it, I enter the following commands in gnuplot, but it does not work: it either hangs or creates a static gif picture rather than an animation.
set terminal gif animate delay 1
set output 'diffeqn3D.gif'
stats 'diffeqn3D_file.txt' nooutput
do for [i=1:int(STATS_blocks)] {
splot 'diffeqn3D_file.txt'
}
Sometimes it also puts up a warning message, citing no z-values for autoscale range.
What is wrong with my code and how should I proceed?
First, try to add some print commands for "debug" information:
set terminal gif animate delay 1
set output 'diffeqn3D.gif'
stats 'diffeqn3D_file.txt' nooutput
print int(STATS_blocks)
do for [i=1:int(STATS_blocks)] {
print i
splot 'diffeqn3D_file.txt'
}
Second, what happens? The splot command does not have an index specifier; try using:
splot 'diffeqn3D_file.txt' index i
Without the index i, gnuplot always plots the whole file, which has two consequences:
The data file is quite large, so plotting takes a long time and it seems that gnuplot hangs.
Gnuplot always plots the same data, so there are no changes that would show up in an animation.
Now gnuplot runs much faster and we will fix the autoscale error. Again, there are two points:
The index specifies a data set within the data file. The stats command counts those sets which "are separated by pairs of blank records" (from gnuplot documentation). Your data file ends with a pair of blank records - this starts a new data set in gnuplot. But this data set is empty which finally leads to the error. There are only STATS_blocks-1 data sets.
The index is zero based. The loop should start with 0 and end at STATS_blocks-2.
So we arrive at this plot command:
do for [i=0:int(STATS_blocks)-2] {
print i
splot 'diffeqn3D_file.txt' index i
}
I have a netcdf file with data as a function of lon,lat and time. I would like to calculate the total number of missing entries in each grid cell summed over the time dimension, preferably with CDO or NCO so I do not need to invoke R, python etc.
I know how to get the total number of missing values
ncap2 -s "nmiss=var.number_miss()" in.nc out.nc
as I answered to this related question:
count number of missing values in netcdf file - R
and CDO can tell me the total summed over space with
cdo info in.nc
but I can't work out how to sum over time. Is there a way, for example, to specify the dimension to sum over with number_miss in ncap2?
We added the missing() function to ncap2 to solve this problem elegantly as of NCO 4.6.7 (May, 2017). To count missing values through time:
ncap2 -s 'mss_val=three_dmn_var_dbl.missing().ttl($time)' in.nc out.nc
Here ncap2 chains two methods together: missing(), followed by a total over the time dimension. The 2D variable mss_val is in out.nc. The response below does the same but totals over space and reports through time (because I misinterpreted the OP).
Old/obsolete answer:
There are two ways to do this with NCO/ncap2, though neither is as elegant as I would like. Either assemble the answer one record at a time by calling number_miss() on each record, or (my preference) use a boolean comparison followed by the total operator along the axes of choice:
zender@aerosol:~$ ncap2 -O -s 'tmp=three_dmn_var_dbl;mss_val=tmp.get_miss();tmp.delete_miss();tmp_bool=(tmp==mss_val);tmp_bool_ttl=tmp_bool.ttl($lon,$lat);print(tmp_bool_ttl);' ~/nco/data/in.nc ~/foo.nc
tmp_bool_ttl[0]=0
tmp_bool_ttl[1]=0
tmp_bool_ttl[2]=0
tmp_bool_ttl[3]=8
tmp_bool_ttl[4]=0
tmp_bool_ttl[5]=0
tmp_bool_ttl[6]=0
tmp_bool_ttl[7]=1
tmp_bool_ttl[8]=0
tmp_bool_ttl[9]=2
or
zender@aerosol:~$ ncap2 -O -s 'for(rec=0;rec<time.size();rec++){nmiss=three_dmn_var_int(rec,:,:).number_miss();print(nmiss);}' ~/nco/data/in.nc ~/foo.nc
nmiss = 0
nmiss = 0
nmiss = 8
nmiss = 0
nmiss = 0
nmiss = 1
nmiss = 0
nmiss = 2
nmiss = 1
nmiss = 2
Even though you are asking for a non-Python solution, I would like to show that it takes only one very short line to find the answer with the help of Python. The variable m_data below has exactly the same structure as a variable with missing values read using the netCDF4 package. With a single np.sum call along the correct axis, you have your answer.
import numpy as np
import matplotlib.pyplot as plt
import netCDF4 as nc4
# Generate random data for this experiment.
data = np.random.rand(365, 64, 128)
# Masked data, this is how the data is read from NetCDF by the netCDF4 package.
# For this example, I mask all values less than 0.1.
m_data = np.ma.masked_array(data, mask=data<0.1)
# It only takes one operation to find the answer.
n_values_missing = np.sum(m_data.mask, axis=0)
# Just a plot of the result.
plt.figure()
plt.pcolormesh(n_values_missing)
plt.colorbar()
plt.xlabel('lon')
plt.ylabel('lat')
plt.show()
# Save a netCDF file of the results.
f = nc4.Dataset('test.nc', 'w', format='NETCDF4')
f.createDimension('lon', 128)
f.createDimension('lat', 64 )
n_values_missing_nc = f.createVariable('n_values_missing', 'i4', ('lat', 'lon'))
n_values_missing_nc[:,:] = n_values_missing[:,:]
f.close()
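For a real file, the only change would be to read the masked variable from the dataset instead of generating random data; a sketch with placeholder file and variable names:

import numpy as np
import netCDF4 as nc4

# placeholder names: replace 'in.nc' and 'var' with your file and variable
ds = nc4.Dataset('in.nc', 'r')
var = ds.variables['var'][:]  # netCDF4 returns a masked array when _FillValue/missing_value is set
n_values_missing = np.sum(np.ma.getmaskarray(var), axis=0)  # axis 0 assumed to be time
ds.close()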