I want to implement my project in two steps: 1. train the network on some data; 2. fine-tune the trained network on some other data.
For the first step (training the network), I got a reasonably good result. But in the second step (fine-tuning the network), a problem appears: the parameters do not update. More details are given below.
My loss consists of two parts: 1. the normal cost for my project; 2. an L2 regularization term. It is given as follows:
c1 = y_conv - y_
c2 = tf.square(c1)
c3 = tf.reduce_sum(c2,1)
c4 = tf.sqrt(c3)
cost = tf.reduce_mean(c4)
regular = 0.0001*( tf.nn.l2_loss(w_conv1) + tf.nn.l2_loss(b_conv1) +\
tf.nn.l2_loss(w_conv2) + tf.nn.l2_loss(b_conv2) +\
tf.nn.l2_loss(w_conv3) + tf.nn.l2_loss(b_conv3) +\
tf.nn.l2_loss(w_conv4) + tf.nn.l2_loss(b_conv4) +\
tf.nn.l2_loss(w_fc1) + tf.nn.l2_loss(b_fc1) +\
tf.nn.l2_loss(w_fc2) + tf.nn.l2_loss(b_fc2) )
loss = regular + cost
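For reference, the cost above is the mean Euclidean distance between prediction rows and target rows; the same arithmetic can be checked in plain Python (illustrative values, no TensorFlow needed):

```python
import math

# two example rows of predictions and targets (illustrative values)
y_conv = [[1.0, 2.0], [3.0, 4.0]]
y_     = [[1.0, 0.0], [0.0, 0.0]]

# per-row Euclidean distance: sqrt(sum of squared differences), as in c1..c4
dists = [math.sqrt(sum((p - t) ** 2 for p, t in zip(rp, rt)))
         for rp, rt in zip(y_conv, y_)]
cost = sum(dists) / len(dists)  # reduce_mean over rows -> (2 + 5) / 2 = 3.5
```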
When fine-tuning the network, I print the loss, the cost, and the L2 term:
Epoch: 1 || loss = 0.184248179 || cost = 0.181599200 || regular = 0.002648979
Epoch: 2 || loss = 0.184086733 || cost = 0.181437753 || regular = 0.002648979
Epoch: 3 || loss = 0.184602532 || cost = 0.181953552 || regular = 0.002648979
Epoch: 4 || loss = 0.184308948 || cost = 0.181659969 || regular = 0.002648979
Epoch: 5 || loss = 0.184251788 || cost = 0.181602808 || regular = 0.002648979
Epoch: 6 || loss = 0.184105504 || cost = 0.181456525 || regular = 0.002648979
Epoch: 7 || loss = 0.184241678 || cost = 0.181592699 || regular = 0.002648979
Epoch: 8 || loss = 0.184189570 || cost = 0.181540590 || regular = 0.002648979
Epoch: 9 || loss = 0.184390061 || cost = 0.181741081 || regular = 0.002648979
Epoch: 10 || loss = 0.184064055 || cost = 0.181415075 || regular = 0.002648979
Epoch: 11 || loss = 0.184323867 || cost = 0.181674888 || regular = 0.002648979
Epoch: 12 || loss = 0.184519534 || cost = 0.181870555 || regular = 0.002648979
Epoch: 13 || loss = 0.183869445 || cost = 0.181220466 || regular = 0.002648979
Epoch: 14 || loss = 0.184313927 || cost = 0.181664948 || regular = 0.002648979
Epoch: 15 || loss = 0.184198738 || cost = 0.181549759 || regular = 0.002648979
As we can see, the L2 term does not update, but the cost and loss do. To check whether the network parameters update, I fetch their values:
gs, lr, solver, l, c, r, pY, bconv1 = sess.run([global_step, learning_rate, train, loss, cost, regular, y_conv, b_conv1], feed_dict={x: batch_X, y_: batch_Y, keep_prob:0.5})
So bconv1 is one part of the parameters, and I have confirmed that bconv1 does not change between two epochs.
I am very confused as to why the cost/loss changes while the network parameters do not.
The whole code except the CNN layers is:
c1 = y_conv - y_
c2 = tf.square(c1)
c3 = tf.reduce_sum(c2,1)
c4 = tf.sqrt(c3)
cost = tf.reduce_mean(c4)
regular = 0.0001*( tf.nn.l2_loss(w_conv1) + tf.nn.l2_loss(b_conv1) +\
tf.nn.l2_loss(w_conv2) + tf.nn.l2_loss(b_conv2) +\
tf.nn.l2_loss(w_conv3) + tf.nn.l2_loss(b_conv3) +\
tf.nn.l2_loss(w_conv4) + tf.nn.l2_loss(b_conv4) +\
tf.nn.l2_loss(w_fc1) + tf.nn.l2_loss(b_fc1) +\
tf.nn.l2_loss(w_fc2) + tf.nn.l2_loss(b_fc2) )
loss = regular + cost
global_step = tf.Variable(0, trainable=False)
initial_learning_rate = 0.001
learning_rate = tf.train.exponential_decay(initial_learning_rate,
global_step=global_step,
decay_steps=int( X.shape[0]/1000 ),decay_rate=0.99, staircase=True)
train = tf.train.AdamOptimizer(learning_rate).minimize(loss,global_step=global_step)
batch_size = 1000
init = tf.initialize_all_variables()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.restore(sess,'../TrainingData/convParameters.ckpt')
total_batch = int( X.shape[0]/batch_size )
for epoch in range(1000):
    L_sum, c_sum, r_sum = 0.0, 0.0, 0.0
    for i in range(total_batch):
        batch_X = X[i*batch_size:(i+1)*batch_size]
        batch_Y = Y[i*batch_size:(i+1)*batch_size]
        gs, lr, solver, l, c, r, pY, bconv1 = sess.run([global_step, learning_rate, train, loss, cost, regular, y_conv, b_conv1], feed_dict={x: batch_X, y_: batch_Y, keep_prob:0.5})
        L_sum, c_sum, r_sum = L_sum + l, c_sum + c, r_sum + r
    print("Epoch: %5d || loss = %.9f || cost = %.9f || regular = %.9f"%(epoch+1, L_sum/total_batch, c_sum/total_batch, r_sum/total_batch))
Any suggestion would be valuable to me. Thank you in advance.
zhang qiang
Actually, I thought I had figured out this problem, but I had not. I only know what causes this bug.
The reason the parameters do not update is that global_step is very large after the pre-training, so the learning rate is very small (about 1e-24). So what I should do is set global_step to 0 after restoring the network parameters. The learning rate must also be set again.
The code should look like:
saver.restore(sess,'../TrainingData/convParameters.ckpt')
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(initial_learning_rate,
global_step=global_step,
decay_steps=int( X.shape[0]/1000 ),decay_rate=0.99, staircase=True)
Then you can fetch the values of global_step and the learning rate to check whether everything is OK:
gafter,lrafter = sess.run([global_step,learning_rate])
This must be done after restoring the network parameters.
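The effect of a stale global_step is easy to reproduce: with staircase=True, tf.train.exponential_decay computes lr = initial * decay_rate^(global_step // decay_steps). A plain-Python sketch of that formula (no TensorFlow needed) shows how a large restored step collapses the learning rate:

```python
def decayed_lr(initial, global_step, decay_steps, decay_rate):
    # staircase exponential decay: initial * decay_rate^(step // decay_steps)
    return initial * decay_rate ** (global_step // decay_steps)

fresh    = decayed_lr(0.001, 0, 100, 0.99)        # full rate at step 0
restored = decayed_lr(0.001, 500_000, 100, 0.99)  # vanishingly small
```

With a restored step in the hundreds of thousands, the decayed rate is far below float-visible progress per update, which matches the observed "parameters do not update".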
I thought I had solved this bug with the code above. However, global_step does not update during training.
What I have tried:
Resetting the optimizer, like this:
global_step = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(initial_learning_rate,
global_step=global_step,
decay_steps=int( X.shape[0]/1000 ),decay_rate=0.99, staircase=True)
train = tf.train.AdamOptimizer(learning_rate).minimize(loss,global_step=global_step)
global_step_init = tf.initialize_variables([global_step])
sess.run(global_step_init)
But I was told I was using an uninitialized variable.
Initializing the optimizer:
global_step_init = tf.initialize_variables([global_step, train])
I was told that train cannot be initialized.
I am so exhausted. Finally, I gave up: I just set the learning rate as a placeholder instead.
If somebody has a solution, please tell me. Thanks a lot.
Related
I am trying to fit measured data with lmfit.
My goal is to get the parameters of the capacitor with an equivalent circuit diagram.
So, I want to create a model with parameters (C, R1, L1,...) and fit it to the measured data.
I know that the resonance frequency is at the global minimum, and R1 must be there as well. C is also known.
So I can fix the parameters C and R1. With the resonance frequency I can also calculate L1.
I created the model, but the fit doesn't work correctly.
Maybe someone could help me with this.
Thanks in advance.
import numpy as np
import matplotlib.pyplot as plt
from lmfit import minimize, Parameters
from lmfit import report_fit
params = Parameters()
params.add('C', value = 220e-9, vary = False)
params.add('L1', value = 0.00001, min = 0, max = 0.1)
params.add('R1', value = globalmin, vary = False)
params.add('Rp', value = 10000, min = 0, max = 10e20)
params.add('Cp', value = 0.1, min = 0, max = 0.1)
def get_elements(params, freq, data):
    C = params['C'].value
    L1 = params['L1'].value
    R1 = params['R1'].value
    Rp = params['Rp'].value
    Cp = params['Cp'].value
    XC = 1/(1j*2*np.pi*freq*C)
    XL = 1j*2*np.pi*freq*L1
    XP = 1/(1j*2*np.pi*freq*Cp)
    Z1 = R1 + XC*Rp/(XC+Rp) + XL
    real = np.real(Z1*XP/(Z1+XP))
    imag = np.imag(Z1*XP/(Z1+XP))
    model = np.sqrt(real**2 + imag**2)
    #model = np.sqrt(R1**2 + ((2*np.pi*freq*L1 - 1/(2*np.pi*freq*C))**2))
    #model = (np.arctan((2*np.pi*freq*L1 - 1/(2*np.pi*freq*C))/R1)) * 360/((2*np.pi))
    return data - model
out = minimize(get_elements, params , args=(freq, data))
report_fit(out)
#make reconstruction for plotting
C = out.params['C'].value
L1 = out.params['L1'].value
R1 = out.params['R1'].value
Rp = out.params['Rp'].value
Cp = out.params['Cp'].value
XC = 1/(1j*2*np.pi*freq*C)
XL = 1j*2*np.pi*freq*L1
XP = 1/(1j*2*np.pi*freq*Cp)
Z1 = R1 + XC*Rp/(XC+Rp) + XL
real = np.real(Z1*XP/(Z1+XP))
imag = np.imag(Z1*XP/(Z1+XP))
reconst = np.sqrt(real**2 + imag**2)
reconst_phase = np.arctan(imag/real)* 360/(2*np.pi)
'''
PLOTTING
'''
#plot of filtred signal vs measered data (AMPLITUDE)
fig = plt.figure(figsize=(40,15))
file_title = 'Measured Data'
plt.subplot(311)
plt.xscale('log')
plt.yscale('log')
plt.xlim([min(freq), max(freq)])
plt.ylabel('Amplitude')
plt.xlabel('Frequency in Hz')
plt.grid(True, which="both")
plt.plot(freq, z12_fac, 'g', alpha = 0.7, label = 'data')
#Plot Impedance of model in magenta
plt.plot(freq, reconst, 'm', label='Reconstruction (Model)')
plt.legend()
#(PHASE)
plt.subplot(312)
plt.xscale('log')
plt.xlim([min(freq), max(freq)])
plt.ylabel('Phase in °')
plt.xlabel('Frequency in Hz')
plt.grid(True, which="both")
plt.plot(freq, z12_deg, 'g', alpha = 0.7, label = 'data')
#Plot Phase of model in magenta
plt.plot(freq, reconst_phase, 'm', label='Reconstruction (Model)')
plt.legend()
plt.savefig(file_title)
plt.close(fig)
measured data
equivalent circuit diagram (model)
Edit 1:
Fit-Report:
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 28
# data points = 4001
# variables = 3
chi-square = 1197180.70
reduced chi-square = 299.444897
Akaike info crit = 22816.4225
Bayesian info crit = 22835.3054
## Warning: uncertainties could not be estimated:
L1: at initial value
Rp: at boundary
Cp: at initial value
Cp: at boundary
[[Variables]]
C: 2.2e-07 (fixed)
L1: 1.0000e-05 (init = 1e-05)
R1: 0.06375191 (fixed)
Rp: 0.00000000 (init = 10000)
Cp: 0.10000000 (init = 0.1)
Edit 2:
Data can be found here:
https://1drv.ms/u/s!AsLKp-1R8HlZhcdlJER5T7qjmvfmnw?e=r8G2nN
Edit 3:
I have now simplified my model to a simple RLC series circuit. With another set of data this works pretty well; see here the plot with another set of data.
def get_elements(params, freq, data):
    C = params['C'].value
    L1 = params['L1'].value
    R1 = params['R1'].value
    #Rp = params['Rp'].value
    #Cp = params['Cp'].value
    #k = params['k'].value
    #freq = np.log10(freq)
    XC = 1/(1j*2*np.pi*freq*C)
    XL = 1j*2*np.pi*freq*L1
    # XP = 1/(1j*2*np.pi*freq*Cp)
    # Z1 = R1*k + XC*Rp/(XC+Rp) + XL
    # real = np.real(Z1*XP/(Z1+XP))
    # imag = np.imag(Z1*XP/(Z1+XP))
    Z1 = R1 + XC + XL
    real = np.real(Z1)
    imag = np.imag(Z1)
    model = np.sqrt(real**2 + imag**2)
    return np.sqrt(np.real(data)**2 + np.imag(data)**2) - model
out = minimize(get_elements, params , args=(freq, data))
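As a sanity check on the series-RLC magnitude model: |Z| should reach its minimum (equal to R1) at the resonance frequency f0 = 1/(2*pi*sqrt(L1*C)). A standalone check using the fitted values reported below (C = 3.3e-9, L1 = 5.2066e-9, R1 = 0.40753691), with only the standard library:

```python
import math

C, L1, R1 = 3.3e-9, 5.2066e-9, 0.40753691  # fitted values from the report
f0 = 1 / (2 * math.pi * math.sqrt(L1 * C))

def zmag(f):
    # |Z| of the series RLC: R1 + 1/(jwC) + jwL1
    return abs(R1 + 1 / (1j * 2 * math.pi * f * C) + 1j * 2 * math.pi * f * L1)

freqs = [f0 * (0.5 + i / 10000) for i in range(10001)]  # scan 0.5*f0 .. 1.5*f0
f_min = min(freqs, key=zmag)  # minimum of |Z| should land at f0, where |Z| = R1
```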
Report:
Chi-Square is really high...
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 4001
# variables = 2
chi-square = 5.0375e+08
reduced chi-square = 125968.118
Akaike info crit = 46988.8798
Bayesian info crit = 47001.4684
[[Variables]]
C: 3.3e-09 (fixed)
L1: 5.2066e-09 +/- 1.3906e-08 (267.09%) (init = 1e-05)
R1: 0.40753691 +/- 24.5685882 (6028.56%) (init = 0.05)
[[Correlations]] (unreported correlations are < 0.100)
C(L1, R1) = -0.174
With my originally set of data I get this:
plot original data (complex)
This is not bad, but also not good. That's why I want to make my model more detailed, so I can fit it in higher frequency regions as well...
Report of this one:
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 25
# data points = 4001
# variables = 2
chi-square = 109156.170
reduced chi-square = 27.2958664
Akaike info crit = 13232.2473
Bayesian info crit = 13244.8359
[[Variables]]
C: 2.2e-07 (fixed)
L1: 2.3344e-08 +/- 1.9987e-10 (0.86%) (init = 1e-05)
R1: 0.17444702 +/- 0.29660571 (170.03%) (init = 0.05)
Please note: I have also changed the input data of the model. I now give the model complex values and then calculate the amplitude. This can also be found here: https://1drv.ms/u/s!AsLKp-1R8HlZhcdlJER5T7qjmvfmnw?e=qnrZk1
I've been training a U-Net for single-class small lesion segmentation, and have been getting consistently volatile validation loss. I have about 20k images split 70/30 between training and validation sets, so I don't think the issue is too little data. I've tried shuffling and resplitting the sets a few times with no change in volatility, so I don't think the validation set is unrepresentative. I have tried lowering the learning rate with no effect on volatility. And I have tried a few loss functions (dice coefficient, focal Tversky, weighted binary cross-entropy). I'm using a decent amount of augmentation so as to avoid overfitting. I've also run through all my data (512x512 float64s with corresponding 512x512 int64 masks, both stored as numpy arrays) to double-check that the value range, dtypes, etc. aren't screwy... and I even removed any ROIs in the masks under 35 pixels in area, which I thought might be artifacts messing with the loss.
I'm using Keras ImageDataGenerator.flow_from_directory... I was initially using zca_whitening and brightness_range augmentation, but I think this causes issues with flow_from_directory and the link between mask and image being lost, so I skipped it.
I've tried validation generators with and without shuffle=True. Batch size is 8.
Here's some of my code, happy to include more if it would help:
# loss
from keras.losses import binary_crossentropy
import keras.backend as K
import tensorflow as tf
epsilon = 1e-5
smooth = 1
def dsc(y_true, y_pred):
    smooth = 1.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    score = (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
    return score

def dice_loss(y_true, y_pred):
    loss = 1 - dsc(y_true, y_pred)
    return loss

def bce_dice_loss(y_true, y_pred):
    loss = binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
    return loss

def confusion(y_true, y_pred):
    smooth = 1
    y_pred_pos = K.clip(y_pred, 0, 1)
    y_pred_neg = 1 - y_pred_pos
    y_pos = K.clip(y_true, 0, 1)
    y_neg = 1 - y_pos
    tp = K.sum(y_pos * y_pred_pos)
    fp = K.sum(y_neg * y_pred_pos)
    fn = K.sum(y_pos * y_pred_neg)
    prec = (tp + smooth) / (tp + fp + smooth)
    recall = (tp + smooth) / (tp + fn + smooth)
    return prec, recall

def tp(y_true, y_pred):
    smooth = 1
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    y_pos = K.round(K.clip(y_true, 0, 1))
    tp = (K.sum(y_pos * y_pred_pos) + smooth) / (K.sum(y_pos) + smooth)
    return tp

def tn(y_true, y_pred):
    smooth = 1
    y_pred_pos = K.round(K.clip(y_pred, 0, 1))
    y_pred_neg = 1 - y_pred_pos
    y_pos = K.round(K.clip(y_true, 0, 1))
    y_neg = 1 - y_pos
    tn = (K.sum(y_neg * y_pred_neg) + smooth) / (K.sum(y_neg) + smooth)
    return tn

def tversky(y_true, y_pred):
    y_true_pos = K.flatten(y_true)
    y_pred_pos = K.flatten(y_pred)
    true_pos = K.sum(y_true_pos * y_pred_pos)
    false_neg = K.sum(y_true_pos * (1 - y_pred_pos))
    false_pos = K.sum((1 - y_true_pos) * y_pred_pos)
    alpha = 0.7
    return (true_pos + smooth) / (true_pos + alpha * false_neg + (1 - alpha) * false_pos + smooth)

def tversky_loss(y_true, y_pred):
    return 1 - tversky(y_true, y_pred)

def focal_tversky(y_true, y_pred):
    pt_1 = tversky(y_true, y_pred)
    gamma = 0.75
    return K.pow((1 - pt_1), gamma)
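As a quick sanity check on the Dice score above, the same formula can be evaluated on tiny binary masks in plain Python (no Keras backend needed):

```python
def dice(y_true, y_pred, smooth=1.0):
    # (2*intersection + smooth) / (sum(true) + sum(pred) + smooth), as in dsc()
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    return (2.0 * inter + smooth) / (sum(y_true) + sum(y_pred) + smooth)

score = dice([1, 1, 0, 0], [1, 0, 0, 0])  # (2*1 + 1) / (2 + 1 + 1) = 0.75
```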
model = BlockModel((len(os.listdir(os.path.join(imageroot,'train_ct','train'))), 512, 512, 1),filt_num=16,numBlocks=4)
#model.compile(optimizer=Adam(learning_rate=0.001), loss=weighted_cross_entropy)
#model.compile(optimizer=Adam(learning_rate=0.001), loss=dice_coef_loss)
model.compile(optimizer=Adam(learning_rate=0.001), loss=focal_tversky)
train_mask = os.path.join(imageroot,'train_masks')
val_mask = os.path.join(imageroot,'val_masks')
model.load_weights(model_weights_path) #I'm initializing with some pre-trained weights from a similar model
data_gen_args_mask = dict(
    rotation_range=10,
    shear_range=20,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=[0.8, 1.2],
    horizontal_flip=True,
    #vertical_flip=True,
    fill_mode='nearest',
    data_format='channels_last'
)
data_gen_args = dict(
    **data_gen_args_mask
)
image_datagen_train = ImageDataGenerator(**data_gen_args)
mask_datagen_train = ImageDataGenerator(**data_gen_args)#_mask)
image_datagen_val = ImageDataGenerator()
mask_datagen_val = ImageDataGenerator()
seed = 1
BS = 8
steps = int(np.floor((len(os.listdir(os.path.join(train_ct,'train'))))/BS))
print(steps)
val_steps = int(np.floor((len(os.listdir(os.path.join(val_ct,'val'))))/BS))
print(val_steps)
train_image_generator = image_datagen_train.flow_from_directory(
    train_ct,
    target_size=(512, 512),
    color_mode="grayscale",
    classes=None,
    class_mode=None,
    seed=seed,
    shuffle=True,
    batch_size=BS)
train_mask_generator = mask_datagen_train.flow_from_directory(
    train_mask,
    target_size=(512, 512),
    color_mode="grayscale",
    classes=None,
    class_mode=None,
    seed=seed,
    shuffle=True,
    batch_size=BS)
val_image_generator = image_datagen_val.flow_from_directory(
    val_ct,
    target_size=(512, 512),
    color_mode="grayscale",
    classes=None,
    class_mode=None,
    seed=seed,
    shuffle=True,
    batch_size=BS)
val_mask_generator = mask_datagen_val.flow_from_directory(
    val_mask,
    target_size=(512, 512),
    color_mode="grayscale",
    classes=None,
    class_mode=None,
    seed=seed,
    shuffle=True,
    batch_size=BS)
train_generator = zip(train_image_generator, train_mask_generator)
val_generator = zip(val_image_generator, val_mask_generator)
# make callback for checkpointing
plot_losses = PlotLossesCallback(skip_first=0,plot_extrema=False)
%matplotlib inline
filepath = os.path.join(versionPath, model_version + "_saved-model-{epoch:02d}-{val_loss:.2f}.hdf5")
if reduce:
    cb_check = [ModelCheckpoint(filepath, monitor='val_loss',
                                verbose=1, save_best_only=False,
                                save_weights_only=True, mode='auto', period=1),
                reduce_lr,
                plot_losses]
else:
    cb_check = [ModelCheckpoint(filepath, monitor='val_loss',
                                verbose=1, save_best_only=False,
                                save_weights_only=True, mode='auto', period=1),
                plot_losses]
# train model
history = model.fit_generator(train_generator, epochs=numEp,
                              steps_per_epoch=steps,
                              validation_data=val_generator,
                              validation_steps=val_steps,
                              verbose=1,
                              callbacks=cb_check,
                              use_multiprocessing=False)
And here's how my loss looks:
Another potentially relevant thing: I tweaked the flow_from_directory code a bit (added npy to the whitelist). But the training loss looks fine, so I'm assuming the issue isn't there.
Two suggestions:
Switch to the classic validation data format (i.e. a numpy array) instead of using a generator; this will ensure you always use exactly the same validation data every time. If you see a different validation curve, then there is something "random" in the validation generator giving you different data at different epochs.
Use a fixed set of samples (100 or 1000 should be enough without any data augmentation) for both training and validation. If everything goes well, you should see your network quickly overfit to this dataset, and your training and validation curves should look very similar. If not, debug your network.
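A minimal sketch of the first suggestion, assuming images and masks are stored as .npy files with matching names (the directory layout here is illustrative): pair the sorted file lists once, load them into arrays, and pass the arrays via validation_data=(X_val, Y_val) so every epoch sees identical validation data.

```python
import os

def paired_val_files(img_dir, mask_dir):
    """Pair validation images with masks by identical (sorted) file names."""
    imgs = sorted(os.listdir(img_dir))
    masks = sorted(os.listdir(mask_dir))
    assert imgs == masks, "image/mask file names must align one-to-one"
    return [(os.path.join(img_dir, n), os.path.join(mask_dir, n)) for n in imgs]

# Then, e.g.: X_val = np.stack([np.load(i) for i, _ in pairs]) (same for masks),
# and model.fit_generator(..., validation_data=(X_val, Y_val))
```

This also guards against a silent image/mask misalignment, which zipped generators cannot detect.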
Does this code have mutation, selection, and crossover, just like the original genetic algorithm?
Since it is a hybrid algorithm (i.e. PSO with GA), does it use all the steps of the original GA or does it skip some of them? Please do tell me.
I am just new to this and still trying to understand. Thank you.
%%% Hybrid GA and PSO code
function [gbest, gBestScore, all_scores] = QAP_PSO_GA(CreatePopFcn, FitnessFcn, UpdatePosition, ...
                                                      nCity, nPlant, nPopSize, nIters)
    % Set algorithm parameters
    constant = 0.95;
    c1 = 1.5; %1.4944; %2;
    c2 = 1.5; %1.4944; %2;
    w = 0.792 * constant;
    % Allocate memory and initialize
    gBestScore = inf;
    all_scores = inf * ones(nPopSize, nIters);
    x = CreatePopFcn(nPopSize, nCity);
    v = zeros(nPopSize, nCity);
    pbest = x;
    % update lbest
    cost_p = inf * ones(1, nPopSize); %feval(FUN, pbest');
    for i = 1:nPopSize
        cost_p(i) = FitnessFcn(pbest(i, 1:nPlant));
    end
    lbest = update_lbest(cost_p, pbest, nPopSize);
    for iter = 1:nIters
        if mod(iter, 1000) == 0
            parents = randperm(nPopSize);
            for i = 1:nPopSize
                x(i,:) = (pbest(i,:) + pbest(parents(i),:))/2;
                % v(i,:) = pbest(parents(i),:) - x(i,:);
                % v(i,:) = (v(i,:) + v(parents(i),:))/2;
            end
        else
            % Update velocity
            v = w*v + c1*rand(nPopSize,nCity).*(pbest-x) + c2*rand(nPopSize,nCity).*(lbest-x);
            % Update position
            x = x + v;
            x = UpdatePosition(x);
        end
        % Update pbest
        cost_x = inf * ones(1, nPopSize);
        for i = 1:nPopSize
            cost_x(i) = FitnessFcn(x(i, 1:nPlant));
        end
        s = cost_x < cost_p;
        cost_p = (1-s).*cost_p + s.*cost_x;
        s = repmat(s', 1, nCity);
        pbest = (1-s).*pbest + s.*x;
        % update lbest
        lbest = update_lbest(cost_p, pbest, nPopSize);
        % update global best
        all_scores(:, iter) = cost_x;
        [cost, index] = min(cost_p);
        if (cost < gBestScore)
            gbest = pbest(index, :);
            gBestScore = cost;
        end
        % draw current fitness
        figure(1);
        plot(iter, min(cost_x), 'cp', 'MarkerEdgeColor', 'k', 'MarkerFaceColor', 'g', 'MarkerSize', 8)
        hold on
        str = strcat('Best fitness: ', num2str(min(cost_x)));
        disp(str);
    end
end
% Function to update lbest
function lbest = update_lbest(cost_p, x, nPopSize)
    sm(1, 1) = cost_p(1, nPopSize);
    sm(1, 2:3) = cost_p(1, 1:2);
    [cost, index] = min(sm);
    if index == 1
        lbest(1, :) = x(nPopSize, :);
    else
        lbest(1, :) = x(index-1, :);
    end
    for i = 2:nPopSize-1
        sm(1, 1:3) = cost_p(1, i-1:i+1);
        [cost, index] = min(sm);
        lbest(i, :) = x(i+index-2, :);
    end
    sm(1, 1:2) = cost_p(1, nPopSize-1:nPopSize);
    sm(1, 3) = cost_p(1, 1);
    [cost, index] = min(sm);
    if index == 3
        lbest(nPopSize, :) = x(1, :);
    else
        lbest(nPopSize, :) = x(nPopSize-2+index, :);
    end
end
If you are new to optimization, I recommend that you first study each algorithm separately; then you may study how GA and PSO can be combined. You will need basic mathematical skills to understand the operators of the two algorithms and to test their efficiency (which is what really matters).
This code chunk is responsible for parent selection and crossover:
parents = randperm(nPopSize);
for i = 1:nPopSize
    x(i,:) = (pbest(i,:) + pbest(parents(i),:))/2;
    % v(i,:) = pbest(parents(i),:) - x(i,:);
    % v(i,:) = (v(i,:) + v(parents(i),:))/2;
end
It is not really obvious how the selection via randperm is done (I have no experience with Matlab).
And this is the code that is responsible for updating the velocity and position of each particle:
% Update velocity
v = w*v + c1*rand(nPopSize,nCity).*(pbest-x) + c2*rand(nPopSize,nCity).*(lbest-x);
% Update position
x = x + v;
x = UpdatePosition(x);
This velocity-update strategy uses what is called the inertia weight w, which basically means we preserve each particle's velocity history rather than recomputing it completely.
It is worth mentioning that the velocity update is performed far more often than crossover (crossover happens only once every 1000 iterations).
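In other words, the crossover used here is arithmetic (averaging) crossover with randomly permuted parents. A plain-Python sketch of the same operation (population values are illustrative):

```python
import random

random.seed(1)
n, genes = 6, 4
pbest = [[random.random() for _ in range(genes)] for _ in range(n)]

parents = random.sample(range(n), n)  # like MATLAB's randperm(nPopSize)
children = [[(pbest[i][g] + pbest[parents[i]][g]) / 2 for g in range(genes)]
            for i in range(n)]
# each child gene lies between the corresponding genes of its two parents
```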
As a civil engineer, I am working on a program to find the equilibrium of a reinforced concrete section subjected to a flexural moment.
Reinforced Concrete Cross Section Equilibrium:
Basically, I have 2 unknowns, which are eps_sup and eps_inf
I have a constant that is M
I have some variables that depend only on the values of (eps_sup, eps_inf). The functions are nonlinear; no need to go into the details.
When I have the right pair of values, the following equations are satisfied:
Fc + Fs = 0 (Forces Equilibrium)
M/z = Fc = -Fs (Moment Equilibrium)
My algorithm, as it is today, consists of finding the minimal value of: abs(Fc+Fs)/Fc + abs(M_calc-M)/M
To do this I iterate over both eps_sup and eps_inf between given limits, with a given step, and the step needs to be small enough to find a solution.
It is working, but it is very (very) slow, since it goes through a very wide range of values without trying to reduce the number of iterations.
Surely there is an optimized solution, and that is where I need your help.
'Constants:
M
'Variables:
delta = 10000000000000
eps_sup = 0
eps_inf = 0
M_calc = 0
Fc = 0
Fs = 0
z = 0
eps_sup_candidate = 0
eps_inf_candidate = 0
For eps_sup = 0 To 0.005 Step 0.000001
    For eps_inf = -0.05 To 0 Step 0.000001
        Fc = f(eps_sup, eps_inf)
        Fs = g(eps_sup, eps_inf)
        z = h(eps_sup, eps_inf)
        M_calc = Fc * z
        If (abs(Fc + Fs) / Fc + abs(M_calc - M) / M) < delta Then
            delta = abs(Fc + Fs) / Fc + abs(M_calc - M) / M
            eps_sup_candidate = eps_sup
            eps_inf_candidate = eps_inf
        End If
    Next
Next
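One cheap way to reduce the iteration count while keeping the same structure is a coarse-to-fine search: scan with a coarse step, then repeatedly re-scan a shrinking window around the best candidate. A sketch in Python with a placeholder objective (the real f, g, h are the nonlinear section functions; this quadratic stand-in is only for illustration):

```python
def refine_search(objective, lo1, hi1, lo2, hi2, levels=6, n=21):
    """Coarse-to-fine grid search: each level zooms in around the best point."""
    for _ in range(levels):
        step1, step2 = (hi1 - lo1) / (n - 1), (hi2 - lo2) / (n - 1)
        best, e1, e2 = min(
            (objective(lo1 + i * step1, lo2 + j * step2),
             lo1 + i * step1, lo2 + j * step2)
            for i in range(n) for j in range(n))
        lo1, hi1 = e1 - step1, e1 + step1  # shrink window around best point
        lo2, hi2 = e2 - step2, e2 + step2
    return e1, e2, best

# placeholder objective with known optimum at (0.003, -0.02)
obj = lambda a, b: (a - 0.003) ** 2 + (b + 0.02) ** 2
eps_sup, eps_inf, _ = refine_search(obj, 0.0, 0.005, -0.05, 0.0)
```

Each level here cuts the step by a factor of ten (the window of two steps is re-split into twenty intervals), so six levels reach roughly the original micro-step resolution with a few thousand objective evaluations instead of the hundreds of millions a flat scan needs. It assumes the objective is reasonably smooth near the optimum, which a gradient-free method like this needs anyway.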
Recently I found this in some code I wrote a few years ago. It was used to rationalize a real value (within a tolerance) by determining a suitable denominator and then checking whether the difference between the original real and the rational was small enough.
Edit to clarify: I actually don't want to convert all real values. For instance, I could choose a max denominator of 14, and a real value that equals 7/15 would stay as-is. That isn't obvious here, because the tolerance is an outside variable in the algorithms I wrote.
The algorithm to get the denominator was this (pseudocode):
denominator(x)
    frac = fractional part of x
    recip = 1/frac
    if (frac < tol)
        return 1
    else
        return recip * denominator(recip)
    end
end
It seems to be based on continued fractions, although on looking at it again it became clear that it was wrong. (It worked for me because it would eventually just spit out infinity, which I handled outside, but it was often really slow.) The value of tol doesn't really do anything except in the case of termination or for numbers that end up close; I don't think it relates to the tolerance for the real-to-rational conversion.
I've replaced it with a version that is not only faster but, I'm pretty sure, won't fail theoretically (d = 1 to start with, and the fractional part is positive, so recip is always >= 1):
denom_iter(x, d)
    return d if d > maxd
    frac = fractional part of x
    recip = 1/frac
    if (frac = 0)
        return d
    else
        return denom_iter(recip, d*recip)
    end
end
What I'm curious to know is whether there's a way to pick the maxd that will ensure it converts all values that are possible for a given tolerance. I'm assuming 1/tol, but don't want to miss something. I'm also wondering if there's a way in this approach to actually limit the denominator size; as written, it allows some denominators larger than maxd.
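For comparison, Python's standard library already implements best rational approximation under a denominator bound via continued fractions (fractions.Fraction.limit_denominator), which is a handy reference point when reasoning about maxd versus tolerance:

```python
from fractions import Fraction

# best approximation to 0.685 with denominator <= 14
best = Fraction(685, 1000).limit_denominator(14)  # Fraction(9, 13)

# the 7/15 example: with a max denominator of 14 the closest candidate is
# 6/13; its error can be compared against the tolerance to decide whether
# to keep the original value as-is
near = Fraction(7, 15).limit_denominator(14)      # Fraction(6, 13)
err = abs(Fraction(7, 15) - near)                 # 1/195, larger than 1/14**2
```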
This can be considered a 2D minimization problem on error:
ArgMin ( r - q / p ), where r is real, q and p are integers
I suggest the use of the Gradient Descent algorithm. The gradient of this objective function is:
f'(q, p) = (-1/p, q/p^2)
The initial guess can be q chosen as the closest integer to r, and p = 1.
The stopping condition can be thresholding of the error.
The pseudo-code of GD can be found in wiki: http://en.wikipedia.org/wiki/Gradient_descent
If the initial guess is close enough, the objective function should be convex.
As Jacob suggested, this problem can be better solved by minimizing the following error function:
ArgMin ( p * r - q ), where r is real, q and p are integers
This is integer linear programming, which can be efficiently solved by any ILP (Integer Linear Programming) solver. GD works in non-linear cases, but lacks efficiency on linear problems.
Initial guesses and the stopping condition can be similar to those stated above. Better choices depend on the individual solver.
I suggest you still assume convexity near the local minimum, which can greatly reduce cost. You can also try the Simplex method, which is great for linear programming problems.
I give credit to Jacob on this.
A problem similar to this is solved in the Approximations section beginning ca. page 28 of Bill Gosper's Continued Fraction Arithmetic document. (Ref: postscript file; also see text version, from line 1984.) The general idea is to compute continued-fraction approximations of the low-end and high-end range limiting numbers, until the two fractions differ, and then choose a value in the range of those two approximations. This is guaranteed to give a simplest fraction, using Gosper's terminology.
The python code below (program "simpleden") implements a similar process. (It probably is not as good as Gosper's suggested implementation, but is good enough that you can see what kind of results the method produces.) The amount of work done is similar to that for Euclid's algorithm, ie O(n) for numbers with n bits, so the program is reasonably fast. Some example test cases (ie the program's output) are shown after the code itself. Note, function simpleratio(vlo, vhi) as shown here returns -1 if vhi is smaller than vlo.
#!/usr/bin/env python
def simpleratio(vlo, vhi):
    rlo, rhi, eps = vlo, vhi, 0.0000001
    if vhi < vlo: return -1
    num = denp = 1
    nump = den = 0
    while 1:
        klo, khi = int(rlo), int(rhi)
        if klo != khi or rlo-klo < eps or rhi-khi < eps:
            tlo = denp + klo * den
            thi = denp + khi * den
            if tlo < thi:
                return tlo + (rlo-klo > eps)*den
            elif thi < tlo:
                return thi + (rhi-khi > eps)*den
            else:
                return tlo
        nump, num = num, nump + klo * num
        denp, den = den, denp + klo * den
        rlo, rhi = 1/(rlo-klo), 1/(rhi-khi)

def test(vlo, vhi):
    den = simpleratio(vlo, vhi)
    fden = float(den)
    ilo, ihi = int(vlo*den), int(vhi*den)
    rlo, rhi = ilo/fden, ihi/fden
    izok = 'ok' if rlo <= vlo <= rhi <= vhi else 'wrong'
    print('{:4d}/{:4d} = {:0.8f} vlo:{:0.8f} {:4d}/{:4d} = {:0.8f} vhi:{:0.8f} {}'.format(ilo, den, rlo, vlo, ihi, den, rhi, vhi, izok))
test (0.685, 0.695)
test (0.685, 0.7)
test (0.685, 0.71)
test (0.685, 0.75)
test (0.685, 0.76)
test (0.75, 0.76)
test (2.173, 2.177)
test (2.373, 2.377)
test (3.484, 3.487)
test (4.0, 4.87)
test (4.0, 8.0)
test (5.5, 5.6)
test (5.5, 6.5)
test (7.5, 7.3)
test (7.5, 7.5)
test (8.534537, 8.534538)
test (9.343221, 9.343222)
Output from program:
> ./simpleden
8/ 13 = 0.61538462 vlo:0.68500000 9/ 13 = 0.69230769 vhi:0.69500000 ok
6/ 10 = 0.60000000 vlo:0.68500000 7/ 10 = 0.70000000 vhi:0.70000000 ok
6/ 10 = 0.60000000 vlo:0.68500000 7/ 10 = 0.70000000 vhi:0.71000000 ok
2/ 4 = 0.50000000 vlo:0.68500000 3/ 4 = 0.75000000 vhi:0.75000000 ok
2/ 4 = 0.50000000 vlo:0.68500000 3/ 4 = 0.75000000 vhi:0.76000000 ok
3/ 4 = 0.75000000 vlo:0.75000000 3/ 4 = 0.75000000 vhi:0.76000000 ok
36/ 17 = 2.11764706 vlo:2.17300000 37/ 17 = 2.17647059 vhi:2.17700000 ok
18/ 8 = 2.25000000 vlo:2.37300000 19/ 8 = 2.37500000 vhi:2.37700000 ok
114/ 33 = 3.45454545 vlo:3.48400000 115/ 33 = 3.48484848 vhi:3.48700000 ok
4/ 1 = 4.00000000 vlo:4.00000000 4/ 1 = 4.00000000 vhi:4.87000000 ok
4/ 1 = 4.00000000 vlo:4.00000000 8/ 1 = 8.00000000 vhi:8.00000000 ok
11/ 2 = 5.50000000 vlo:5.50000000 11/ 2 = 5.50000000 vhi:5.60000000 ok
5/ 1 = 5.00000000 vlo:5.50000000 6/ 1 = 6.00000000 vhi:6.50000000 ok
-7/ -1 = 7.00000000 vlo:7.50000000 -7/ -1 = 7.00000000 vhi:7.30000000 wrong
15/ 2 = 7.50000000 vlo:7.50000000 15/ 2 = 7.50000000 vhi:7.50000000 ok
8030/ 941 = 8.53347503 vlo:8.53453700 8031/ 941 = 8.53453773 vhi:8.53453800 ok
24880/2663 = 9.34284641 vlo:9.34322100 24881/2663 = 9.34322193 vhi:9.34322200 ok
If, rather than the simplest fraction in a range, you seek the best approximation given some upper limit on denominator size, consider code like the following, which replaces all the code from def test(vlo, vhi) forward.
def smallden(target, maxden):
    global pas
    pas = 0
    tol = 1/float(maxden)**2
    while 1:
        den = simpleratio(target-tol, target+tol)
        if den <= maxden: return den
        tol *= 2
        pas += 1

# Test driver for smallden(target, maxden) routine
import random
totalpass, trials, passes = 0, 20, [0 for i in range(20)]
print('Maxden Num Den Num/Den Target Error Passes')
for i in range(trials):
    target = random.random()
    maxden = 10 + round(10000*random.random())
    den = smallden(target, maxden)
    num = int(round(target*den))
    got = float(num)/den
    print('{:4d} {:4d}/{:4d} = {:10.8f} = {:10.8f} + {:12.9f} {:2}'.format(
        int(maxden), num, den, got, target, got - target, pas))
    totalpass += pas
    passes[pas-1] += 1
print('Average pass count: {:0.3}\nPass histo: {}'.format(
    float(totalpass)/trials, passes))
In production code, drop out all the references to pas (etc.), ie, drop out pass-counting code.
The routine smallden is given a target value and a maximum value for allowed denominators. Given maxden possible choices of denominators, it's reasonable to suppose that a tolerance on the order of 1/maxden² can be achieved. The pass-counts shown in the following typical output (where target and maxden were set via random numbers) illustrate that such a tolerance was reached immediately more than half the time, but in other cases tolerances 2 or 4 or 8 times as large were used, requiring extra calls to simpleratio. Note, the last two lines of output from a 10000-number test run are shown following the complete output of a 20-number test run.
Maxden Num Den Num/Den Target Error Passes
1198 32/ 509 = 0.06286837 = 0.06286798 + 0.000000392 1
2136 115/ 427 = 0.26932084 = 0.26932103 + -0.000000185 1
4257 839/2670 = 0.31423221 = 0.31423223 + -0.000000025 1
2680 449/ 509 = 0.88212181 = 0.88212132 + 0.000000486 3
2935 440/1853 = 0.23745278 = 0.23745287 + -0.000000095 1
6128 347/1285 = 0.27003891 = 0.27003899 + -0.000000077 3
8041 1780/4243 = 0.41951449 = 0.41951447 + 0.000000020 2
7637 3926/7127 = 0.55086292 = 0.55086293 + -0.000000010 1
3422 27/ 469 = 0.05756930 = 0.05756918 + 0.000000113 2
1616 168/1507 = 0.11147976 = 0.11147982 + -0.000000061 1
260 62/ 123 = 0.50406504 = 0.50406378 + 0.000001264 1
3775 52/3327 = 0.01562970 = 0.01562750 + 0.000002195 6
233 6/ 13 = 0.46153846 = 0.46172772 + -0.000189254 5
3650 3151/3514 = 0.89669892 = 0.89669890 + 0.000000020 1
9307 2943/7528 = 0.39094049 = 0.39094048 + 0.000000013 2
962 206/ 225 = 0.91555556 = 0.91555496 + 0.000000594 1
2080 564/1975 = 0.28556962 = 0.28556943 + 0.000000190 1
6505 1971/2347 = 0.83979548 = 0.83979551 + -0.000000022 1
1944 472/ 833 = 0.56662665 = 0.56662696 + -0.000000305 2
3244 291/1447 = 0.20110574 = 0.20110579 + -0.000000051 1
Average pass count: 1.85
Pass histo: [12, 4, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
The last two lines of output from a 10000-number test run:
Average pass count: 1.77
Pass histo: [56659, 25227, 10020, 4146, 2072, 931, 497, 233, 125, 39, 33, 17, 1, 0, 0, 0, 0, 0, 0, 0]