'numpy.ndarray' object has no attribute 'iloc' error in the Derivative Function - arguments

I have an issue when trying to pass parameters to a function. Below is a function that I ran successfully and got results from.
def tank(h, t):
    c1 = 0.13
    c2 = 0.20
    Ac = 2.0
    qin = 0.5
    # c1 = s2.iloc[0, 0]
    # c2 = s2.iloc[1, 0]
    # Ac = s2.iloc[2, 0]
    # qin = s2.iloc[3, 0]
    print('c1: ', c1)
    print('c2: ', c2)
    print('Ac: ', Ac)
    print('qin: ', qin)
    qout1 = c1 * h[0] ** 0.5
    qout2 = c2 * h[1] ** 0.5
    dhdt1 = (qin - qout1) / Ac
    dhdt2 = (qout1 - qout2) / Ac
    dhdt = [dhdt1, dhdt2]
    return dhdt

# print(tank([0.25, 0.5], 0.5))
h0 = [0, 0]
t = np.linspace(0, 10, 21)
y = odeint(tank, h0, t)
print(y)
plt.plot(t, y[:, 0], 'b-')
plt.plot(t, y[:, 1], 'r--')
plt.show()
Now I modified the above function and tried to pass the initial constant values in from another function, but I am getting the error "'numpy.ndarray' object has no attribute 'iloc'".
def para():
    c1 = 0.13
    c2 = 0.20
    Ac = 2.0
    qin = 0.5
    h0 = 0
    h1 = 0
    s0 = pd.Series({'c1': c1, 'c2': c2, 'Ac': Ac, 'qin': qin, 'h[0]': h0, 'h[1]': h1})
    return s0

s1 = para()

def tank(parameters, t):
    c1 = parameters.iloc[0, 0]
    c2 = parameters.iloc[1, 0]
    Ac = parameters.iloc[2, 0]
    qin = parameters.iloc[3, 0]
    h[0] = parameters.iloc[4, 0]
    h[1] = parameters.iloc[5, 0]
    qout1 = c1 * h[0] ** 0.5
    qout2 = c2 * h[1] ** 0.5
    dhdt1 = (qin - qout1) / Ac
    dhdt2 = (qout1 - qout2) / Ac
    dhdt = [dhdt1, dhdt2]
    return dhdt

t = np.linspace(0, 10, 21)
y = odeint(tank, s1, t)
print(y)
plt.plot(t, y[:, 0], 'b-')
plt.plot(t, y[:, 1], 'r--')
plt.show()
I'd appreciate any help sorting out this problem so I can simulate the function successfully.
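A minimal sketch of the usual fix, assuming the same imports as above (numpy as np, pandas as pd, scipy.integrate.odeint, matplotlib.pyplot as plt): odeint converts whatever is passed as y0 into a plain numpy array and hands that array to the derivative function at every step, so the pandas Series (and its .iloc) is gone by the time tank runs. Keeping the state [h0, h1] as the first argument and passing the constants separately through odeint's args tuple avoids the error:

import numpy as np
import pandas as pd
from scipy.integrate import odeint
import matplotlib.pyplot as plt

def para():
    # constants only; the state h stays out of the parameter Series
    return pd.Series({'c1': 0.13, 'c2': 0.20, 'Ac': 2.0, 'qin': 0.5})

def tank(h, t, params):
    # h is the state array supplied by odeint; params is the Series from para()
    c1, c2, Ac, qin = params['c1'], params['c2'], params['Ac'], params['qin']
    qout1 = c1 * h[0] ** 0.5
    qout2 = c2 * h[1] ** 0.5
    dhdt1 = (qin - qout1) / Ac
    dhdt2 = (qout1 - qout2) / Ac
    return [dhdt1, dhdt2]

s1 = para()
h0 = [0, 0]                           # initial levels, as in the working version
t = np.linspace(0, 10, 21)
y = odeint(tank, h0, t, args=(s1,))   # odeint forwards args to tank unchanged
plt.plot(t, y[:, 0], 'b-')
plt.plot(t, y[:, 1], 'r--')
plt.show()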

Related

Change all images in training set

I have a convolutional neural network, and I wanted to train it on images from the training set, but first they should be wrapped with my function change(tensor, float), which takes a tensor/image of the form [height, width, 3] and a float.
batch_size = 4
# loading data
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)
# CNN architecture
for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # size of inputs [4, 3, 32, 32]
        # size of labels [4]
        inputs = change(inputs, 0.1)  # <---------------------------- the failing call
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)  # [4, 10]
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
            running_loss = 0.0

print('Finished Training')
I am trying to apply the image function change, but it gives an object error. Is there a quick way to fix it?
change calls into Julia; the Julia function works completely fine with other objects. Error message:
JULIA: MethodError: no method matching copy(::PyObject)
Closest candidates are:
copy(!Matched::T) where T<:SHA.SHA3_CTX at /opt/julia-1.7.2/share/julia/stdlib/v1.7/SHA/src/types.jl:213
copy(!Matched::T) where T<:SHA.SHA2_CTX at /opt/julia-1.7.2/share/julia/stdlib/v1.7/SHA/src/types.jl:212
copy(!Matched::Number) at /opt/julia-1.7.2/share/julia/base/number.jl:113
I would recommend putting the change function into the transforms list, so the data changes happen at the transformation stage.
partial from functools will help you fix the number of arguments, like this:
from functools import partial
from torchvision.transforms import Compose, RandomResizedCrop, ToTensor

def change(input, float):
    pass

# Use partial to fix the float parameter, so that change accepts only the input
change_partial = partial(change, float=pass_float_value_here)

# Add change_partial to the list of transforms in one place only:
# before ToTensor if it operates on PIL Images, after it if it operates on torch tensors
transforms = Compose([
    RandomResizedCrop(img_size),  # example
    # change_partial,             # <- here if it operates on PIL Images
    ToTensor(),                   # convert to tensor
    change_partial,               # <- here if it operates on torch tensors
])
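As a usage sketch (an assumption, not part of the original answer): with the CIFAR-10 setup from the question, the composed pipeline replaces the transform passed to the dataset, so the per-batch inputs = change(inputs, 0.1) call inside the training loop can be dropped. Note that a transform runs once per image (shape [H, W, 3] before ToTensor), unlike the in-loop call, which received a whole [4, 3, 32, 32] batch:

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transforms)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
                                          shuffle=True, num_workers=2)
# batches from trainloader have already been passed through change_partial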

TensorFlow - directly calling tf.function much faster than calling tf.function returned from wrapper

I am training a VAE (using federated learning, but that is not so important) and wanted to keep the loss and train functions easy to exchange. The initial approach was to have a tf.function as the loss function and a tf.function as the train function, as follows:
@tf.function
def kl_reconstruction_loss(model, model_input, beta):
    x, y = model_input
    mean, logvar = model.encode(x, y)
    z = model.reparameterize(mean, logvar)
    x_logit = model.decode(z, y)
    cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=x)
    reconstruction_loss = tf.reduce_mean(tf.reduce_sum(cross_ent, axis=[1, 2, 3]), axis=0)
    kl_loss = tf.reduce_mean(0.5 * tf.reduce_sum(tf.exp(logvar) + tf.square(mean) - 1. - logvar, axis=-1), axis=0)
    loss = reconstruction_loss + beta * kl_loss
    return loss, kl_loss, reconstruction_loss
@tf.function
def train_fn(model: tf.keras.Model, batch, optimizer, kl_beta):
    """Trains the model on a single batch.

    Args:
        model: The VAE model.
        batch: A batch of inputs [images, labels] for the VAE.
        optimizer: The optimizer to train the model.
        kl_beta: Weighting of the KL loss.

    Returns:
        The loss.
    """
    def vae_loss():
        """Does the forward pass and computes losses for the generator."""
        # N.B. The complete pass must be inside loss() for gradient tracing.
        return kl_reconstruction_loss(model, batch, kl_beta)

    with tf.GradientTape() as tape:
        loss, kl_loss, rc_loss = vae_loss()
    grads = tape.gradient(loss, model.trainable_variables)
    grads_and_vars = zip(grads, model.trainable_variables)
    optimizer.apply_gradients(grads_and_vars)
    return loss
For my dataset this results in an epoch duration of approx. 25 seconds. However, since I have to call those functions directly in my code, I would have to swap in different ones by hand if I wanted to try out different loss/train functions.
So, alternatively, I followed https://github.com/google-research/federated/tree/master/gans and wrapped the loss function in a class and the train function in another function. Now I have:
class VaeKlReconstructionLossFns(AbstractVaeLossFns):

    @tf.function
    def vae_loss(self, model, model_input, labels, global_round):
        # KL reconstruction loss
        mean, logvar = model.encode(model_input, labels)
        z = model.reparameterize(mean, logvar)
        x_logit = model.decode(z, labels)
        cross_ent = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=model_input)
        reconstruction_loss = tf.reduce_mean(tf.reduce_sum(cross_ent, axis=[1, 2, 3]), axis=0)
        kl_loss = tf.reduce_mean(0.5 * tf.reduce_sum(tf.exp(logvar) + tf.square(mean) - 1. - logvar, axis=-1), axis=0)
        loss = reconstruction_loss + self._get_beta(global_round) * kl_loss
        if model.losses:
            loss += tf.add_n(model.losses)
        return loss, kl_loss, reconstruction_loss
def create_train_vae_fn(
        vae_loss_fns: vae_losses.AbstractVaeLossFns,
        vae_optimizer: tf.keras.optimizers.Optimizer):
    """Create a function that trains VAE, binding loss and optimizer.

    Args:
        vae_loss_fns: Instance of the vae_losses.AbstractVaeLossFns interface,
            specifying the VAE training loss.
        vae_optimizer: Optimizer for training the VAE.

    Returns:
        Function that executes one step of VAE training.
    """
    # We check that the optimizer has not been used previously, which ensures
    # that when it is bound the train fn isn't holding onto a different copy of
    # the optimizer variables than the copy that is being exchanged between
    # server and clients.
    if vae_optimizer.variables():
        raise ValueError(
            'Expected vae_optimizer to not have been used previously, but '
            'variables were already initialized.')

    @tf.function
    def train_vae_fn(model: tf.keras.Model,
                     model_inputs,
                     labels,
                     global_round,
                     new_optimizer_state=None):
        """Trains the model on a single batch.

        Args:
            model: The VAE model.
            model_inputs: A batch of inputs (usually images) for the VAE.
            labels: A batch of labels corresponding to the inputs.
            global_round: The current global FL round, for beta calculation.
            new_optimizer_state: A possible optimizer state to overwrite the current one with.

        Returns:
            The number of examples trained on.
            The loss.
            The updated optimizer state.
        """
        def vae_loss():
            """Does the forward pass and computes losses for the generator."""
            # N.B. The complete pass must be inside loss() for gradient tracing.
            return vae_loss_fns.vae_loss(model, model_inputs, labels, global_round)

        # Set optimizer vars
        optimizer_state = get_optimizer_state(vae_optimizer)
        if new_optimizer_state is not None:
            # If the optimizer is uninitialised, initialise its vars
            try:
                tf.nest.assert_same_structure(optimizer_state, new_optimizer_state)
            except ValueError:
                initialize_optimizer_vars(vae_optimizer, model)
                optimizer_state = get_optimizer_state(vae_optimizer)
                tf.nest.assert_same_structure(optimizer_state, new_optimizer_state)
            tf.nest.map_structure(lambda a, b: a.assign(b), optimizer_state, new_optimizer_state)

        with tf.GradientTape() as tape:
            loss, kl_loss, rc_loss = vae_loss()
        grads = tape.gradient(loss, model.trainable_variables)
        grads_and_vars = zip(grads, model.trainable_variables)
        vae_optimizer.apply_gradients(grads_and_vars)
        return tf.shape(model_inputs)[0], loss, optimizer_state

    return train_vae_fn
This new formulation takes about 86 seconds per epoch.
I am struggling to understand why the second version performs so much worse than the first one. Does anyone have a good explanation for this?
Thanks in advance!
EDIT: My Tensorflow version is 2.5.0
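One common cause of a slowdown like this is tf.function retracing: each new combination of Python-level argument values (for example a plain int global_round that changes every round, or new_optimizer_state switching between None and a structure) triggers a fresh trace of the wrapped function. A minimal sketch of how to check, assuming TF 2.5 as stated above (where tf.function exposes experimental_get_tracing_count):

import tensorflow as tf

@tf.function
def step(x, round_idx):
    # trivial body; only the tracing behaviour matters here
    return x * tf.cast(round_idx, x.dtype)

x = tf.ones([4])
for r in range(3):
    step(x, r)                   # Python int argument: retraces for every new value
print(step.experimental_get_tracing_count())   # 3

for r in range(3):
    step(x, tf.constant(r))      # tensor argument: traced once, then reused
print(step.experimental_get_tracing_count())   # 4 (one extra trace for the tensor signature)

If train_vae_fn's tracing count grows with the number of rounds, passing global_round (and the optimizer state) as tensors with a fixed structure is the usual remedy; the first version may avoid this simply because its arguments keep the same signature across calls.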

Tensorflow Dataset Interleave Image File Paths for Reading

I am trying to use the tensorflow Dataset API to load tif images that can be on network storage. The map_func I pass to the tf.data.Dataset.interleave function is not passed the string filename I expect, but instead a tensor with dtype string.
I have tried evaluating this tensor using sess.run and tensor.eval() (also passing in the current session as the session parameter), but tensorflow raises a ValueError: "Fetch argument cannot be interpreted as a Tensor. (Tensor Tensor("arg0:0", shape=(), dtype=string) is not an element of this graph.)".
An example of my tensorflow data pipeline:
class DataLoader:
    ...
    def setup(self):
        ...
        (tf.data.Dataset.from_tensor_slices(
            (
                self.training_filenames,       # a python list of strings
                self.training_label_filenames  # a python list of strings
            )
        )
        .apply(tf.data.experimental.filter_for_shard(
            self.shard_count,
            self.shard_index))
        .repeat()
        .shuffle(buffer_size=self.training_data_shuffle_buffer_size)
        .interleave(
            lambda data_filepath, label_filepath: (
                self.preprocess_training_data(data_filepath, label_filepath)
            ),
            cycle_length=tf.data.experimental.AUTOTUNE,
            num_parallel_calls=tf.data.experimental.AUTOTUNE
        )
        .batch(self.training_data_batch_size)
        .prefetch(self.training_data_batch_size))
        ...

    def preprocess_training_data(self, data_filepath, label_filepath):
        data = tifffile.imread(self.session.run(data_filepath).decode())
        data_resize = (self.training_data_shape[0], self.training_data_shape[1])
        data_transpose = (1, 0, 2)
        data_scale = 255.0
        data_dtype = self.training_data_type.as_numpy_dtype()
        data = numpy.transpose(
            cv2.resize(data, data_resize), data_transpose
        ).astype(data_dtype) / data_scale
        label = tifffile.imread(self.session.run(label_filepath))
        label = numpy.transpose(
            numpy.expand_dims(
                cv2.resize(label, data_resize),
                2
            ),
            data_transpose
        )
        weights = [
            self.vec_class_weights(current_label)
                .astype(self.label_data_type.as_numpy_dtype())
            for current_label in label
        ]
        return data, label, weights
I expected my preprocess_training_data function to be passed strings, or at least to be able to evaluate the tensor it is passed by the interleave transformation into a string.
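Functions mapped over a tf.data pipeline run in graph mode and receive symbolic tensors, so Python libraries like tifffile can only see concrete strings if the call is wrapped in tf.py_function (or tf.numpy_function), which executes the wrapped function eagerly. A minimal sketch under that assumption, with read_pair as a hypothetical stand-in for the question's preprocessing and dummy arrays in place of tifffile.imread to keep it self-contained (note that interleave's map_func must return a Dataset, so a plain element-wise load like this belongs in map):

import numpy as np
import tensorflow as tf

def read_pair(data_filepath, label_filepath):
    # Runs eagerly: .numpy() now yields the concrete filename bytes.
    data_path = data_filepath.numpy().decode()
    label_path = label_filepath.numpy().decode()
    # tifffile.imread(data_path) / cv2.resize(...) would go here; dummy data
    # keeps this sketch runnable without the actual files.
    data = np.zeros((32, 32, 3), dtype=np.float32)
    label = np.zeros((32, 32, 1), dtype=np.float32)
    return data, label

def load(data_filepath, label_filepath):
    data, label = tf.py_function(
        read_pair, [data_filepath, label_filepath], [tf.float32, tf.float32])
    # py_function erases static shapes; restore them for downstream ops.
    data.set_shape((32, 32, 3))
    label.set_shape((32, 32, 1))
    return data, label

ds = tf.data.Dataset.from_tensor_slices(
    (["a.tif"], ["a_label.tif"])
).map(load, num_parallel_calls=tf.data.experimental.AUTOTUNE)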

Using multiple validation sets with keras

I am training a model with Keras using the model.fit() method.
I would like to use multiple validation sets that are validated on separately after each training epoch, so that I get one loss value per validation set. If possible, they should both be displayed during training and also be returned by the keras.callbacks.History() callback.
I am thinking of something like this:
history = model.fit(train_data, train_targets,
                    epochs=epochs,
                    batch_size=batch_size,
                    validation_data=[
                        (validation_data1, validation_targets1),
                        (validation_data2, validation_targets2)],
                    shuffle=True)
I currently have no idea how to implement this. Is it possible to achieve this by writing my own Callback? Or how else would you approach this problem?
I ended up writing my own Callback, based on the History callback, to solve the problem. I'm not sure if this is the best approach, but the following Callback records losses and metrics for the training and validation set like the History callback does, as well as losses and metrics for the additional validation sets passed to the constructor.
class AdditionalValidationSets(Callback):
    def __init__(self, validation_sets, verbose=0, batch_size=None):
        """
        :param validation_sets:
            a list of 3-tuples (validation_data, validation_targets, validation_set_name)
            or 4-tuples (validation_data, validation_targets, sample_weights, validation_set_name)
        :param verbose:
            verbosity mode, 1 or 0
        :param batch_size:
            batch size to be used when evaluating on the additional datasets
        """
        super(AdditionalValidationSets, self).__init__()
        self.validation_sets = validation_sets
        for validation_set in self.validation_sets:
            if len(validation_set) not in [3, 4]:
                raise ValueError()
        self.epoch = []
        self.history = {}
        self.verbose = verbose
        self.batch_size = batch_size

    def on_train_begin(self, logs=None):
        self.epoch = []
        self.history = {}

    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        self.epoch.append(epoch)

        # record the same values as History() as well
        for k, v in logs.items():
            self.history.setdefault(k, []).append(v)

        # evaluate on the additional validation sets
        for validation_set in self.validation_sets:
            if len(validation_set) == 3:
                validation_data, validation_targets, validation_set_name = validation_set
                sample_weights = None
            elif len(validation_set) == 4:
                validation_data, validation_targets, sample_weights, validation_set_name = validation_set
            else:
                raise ValueError()

            results = self.model.evaluate(x=validation_data,
                                          y=validation_targets,
                                          verbose=self.verbose,
                                          sample_weight=sample_weights,
                                          batch_size=self.batch_size)

            for metric, result in zip(self.model.metrics_names, results):
                valuename = validation_set_name + '_' + metric
                self.history.setdefault(valuename, []).append(result)
which I am then using like this:
history = AdditionalValidationSets([(validation_data2, validation_targets2, 'val2')])
model.fit(train_data, train_targets,
          epochs=epochs,
          batch_size=batch_size,
          validation_data=(validation_data1, validation_targets1),
          callbacks=[history],
          shuffle=True)
I tested this on TensorFlow 2 and it worked. You can evaluate on as many validation sets as you want at the end of each epoch:
class MyCustomCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        res_eval_1 = self.model.evaluate(X_test_1, y_test_1, verbose=0)
        res_eval_2 = self.model.evaluate(X_test_2, y_test_2, verbose=0)
        print(res_eval_1)
        print(res_eval_2)
And later:
my_val_callback = MyCustomCallback()
# Your model creation code
model.fit(..., callbacks=[my_val_callback])
According to the current Keras docs, you can pass callbacks to evaluate and evaluate_generator, so you can call evaluate multiple times with different datasets.
I have not tested this myself, so I am happy if you comment your experiences with it below.
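A minimal sketch of that last suggestion (untested, as the answer says), assuming a compiled tf.keras model and the two validation sets from above; EvalLogger is a hypothetical callback, included only to show where the callbacks argument goes:

class EvalLogger(tf.keras.callbacks.Callback):
    def on_test_end(self, logs=None):
        # logs holds the metric results of the evaluate() call
        print('evaluation finished:', logs)

logger = EvalLogger()
res1 = model.evaluate(validation_data1, validation_targets1,
                      verbose=0, callbacks=[logger])
res2 = model.evaluate(validation_data2, validation_targets2,
                      verbose=0, callbacks=[logger])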

How to overcome simulation of ode issue in R?

I'm trying to solve, and then optimise the parameter and initial condition values of, a system of differential equations. However, I can't run the code due to error messages (this code was working fine when FS was given as an equation). Due to the limited data points, I want to optimise the whole system against the E values (calculated using equation F). I've only made the code work up to the LLode function, so the optimisation doesn't take place at all.
I've managed to solve the error thanks to forum replies (Lyngbakr):
initSim <- ode(initCond, tspan, hormonesode, params, method="ode45",
               atol=1e-10, rtol=1e-10, hmax=NULL, maxsteps=100000000000000000)
Warning message:
In rk(y, times, func, parms, method = "ode45", ...) :
Number of time steps 1 exceeded maxsteps at t = 0
I would appreciate any hint.
Malgosia
My code looks as follows:
tspan<-c(0,1,2,3,4,5)#,6,7,8,9,16,17,18,19)
E<-c(0.303,0.205,0.205,0.381,0.272,0.188)#,0.317,0.274,0.106,0.161,0.947,2.722,4.701,0.24)
names(df)=c("tspan","E")
require(deSolve)
#initial parameter values
#k1<-1.062169#0.370 7.754
k2<-1.908891#-0.00727284 0.022
#k3<-0.321
k4<-2.14
k5<-10.7
#k6<-1.07
A0<-1.38 #15.47
B0<-0.61 #0.298
C0<-0.28
#F0<-0.303#3.28803757 3.434
#define a weight vector outside LLODE function
wts<-sqrt(sqrt(E))
#combine parameters into a vector
params<-c(k2,k4,k5,A0,B0,C0)
names(params)<-c("k2","k4","k5","A0","B0","C0")
#ode function
hormonesode <- function(t, x, params){
  A <- x[1]
  B <- x[2]
  C <- x[3]
  D <- x[4]
  E <- x[5]
  F <- x[6]
  # k1 <- params[1]
  k2 <- params[1]
  # k3 <- params[3]
  k4 <- params[2]
  k5 <- params[3]
  A0 <- params[4]
  B0 <- params[5]
  C0 <- params[6]
  P <- 3.02796-3.1867*cos(0.314159*t)-0.55332*cos(2*0.314159*t)+0.362678*cos(3*0.314159*t)+0.486708*cos(4*0.314159*t)-0.10133*cos(5*0.314159*t)-0.21977*cos(6*0.314159*t)-0.08926*cos(7*0.314159*t)+0.222292*cos(8*0.314159*t)-1.05119*sin(0.314159*t)+0.855633*sin(2*0.314159*t)+0.176677*sin(3*0.314159*t)-0.05658*sin(4*0.314159*t)-0.34108*sin(5*0.314159*t)-0.15718*sin(6*0.314159*t)+0.397642*sin(7*0.314159*t)-0.0986*sin(8*0.314159*t)
  FS <- 0.1944+0.002017*cos(0.261799*t)+0.009603*cos(2*0.261799*t)+0.01754*cos(3*0.261799*t)+0.106208*cos(4*0.261799*t)+0.020423*cos(5*0.261799*t)+0.015417*cos(6*0.261799*t)+0.01079*cos(7*0.261799*t)+0.115042*cos(8*0.261799*t)+0.008853*sin(0.261799*t)+0.013523*sin(2*0.261799*t)+0.012254*sin(3*0.261799*t)+0.026053*sin(4*0.261799*t)+0.000957*sin(5*0.261799*t)-0.001*sin(6*0.261799*t)+0.002374*sin(7*0.261799*t)+0.026775*sin(8*0.261799*t)
  dA <- 1.0621*1/(1+(1/5*P)^5)*(1/3*(F^10)/(1+(F)*1/3)^10)-2.14*A
  dB <- 75*(((A/5)^10)/(1+(A/5)^10))-8.56*B
  dC <- 1.909*FS+0.321*FS*C-0.749*C
  dD <- 0.749*C-0.749*D+0.214*FS*D^2
  dE <- 0.749*D-0.749*E+0.214*B*E^2
  dF <- k2+k4*D+k5*E-1.07*F
  output <- c(dA, dB, dC, dD, dE, dF)
  list(output)
}
#Initial conditions
A0<-2500#1.0038#2.794
B0<-105#25.0061#6.13
C0<-0.018#0.02#0.06126
D0<-0
E0<-0
F0<-0.303#3.28803757 3.434
initCond<-c(A0, B0, C0, D0, E0, F0)
initCond
#run ode with initial guesses
initSim<-ode(initCond, tspan, hormonesode, params, method="ode45",atol = 1e-1, rtol = 1e-1,hmax=NULL, maxsteps=100000)
plot(tspan,initSim[,7], type="l", lty="dashed")
points(tspan,E)
initSim
LLode <- function(params){
  A0 <- params[4]
  B0 <- params[5]
  C0 <- params[6]
  D0 <- 0
  E0 <- 0
  F0 <- 0.303
  initCond <- c(A0, B0, C0, D0, E0, F0)
  # Run the ODE
  odeOut <- ode(initCond, tspan, hormonesode, params, method="ode45",
                atol=1e-1, rtol=1e-1, hmax=NULL, maxsteps=100000)
  if(attr(odeOut, "istate")[1] != 0){
    # Check if satisfactory: 2 indicates perceived success of method 'lsoda',
    # 0 for 'ode45'. Other integrators may have different indicator codes.
    cat("Integration possibly failed\n")
    LL <- .Machine$double.xmax*1e-06        # indicate failure
  } else {
    y <- odeOut[,7]                         # measurement variable
    wtDiff1 <- (y-E)*wts                    # weighted difference
    LL <- as.numeric(crossprod(wtDiff1))    # sum of squares
  }
  LL
}
# optimise using optim()
MLoptres <- optim(params, LLode, method="Nelder-Mead",
                  control=list(trace=0, maxit=500))
MLoptres
require(optimx)
optxres<-optimx(params,LLode,lower=rep(0,6),upper=rep(200,6),
method=c("nmkb","bobyqa"),
control=list(usenumDeriv=TRUE, maxit=500,trace=0))
summary(optxres,order=value)
bestpar<-coef(summary(optxres,order=value)[1,])
cat("best parameters:")
print(bestpar)
#dput(optxres,file='includes/C20bestpar.dput')
bpSim<-ode(initCond,tspan,hormonesode,bestpar,method="ode45")
X11()
plot(tspan,initSim[,7],type="l",lty="dashed")
points(tspan,E)
#points(tspan,MLoptres[,]/k,type="l",lty="twodash")
points(tspan,bpSim[,7],type="l")
title(main="Improved fit using optimx")
