NDSolve problem in a set of equations in Mathematica

I have written this set of dynamic equations for my problem related to
magnetized targets in fusion. I just want to know why Mathematica
cannot solve it.
I have used NDSolve before and gotten good answers, but when I change my formulas it cannot solve the system. What is the problem? I can send you the entire code.
bal = {(3/2)*ne[t]*k*Te'[t] == \[Eta]d*wd + wie - wb +
f\[Alpha]*\[Eta]f*wf - whe, (3/2)*ni[t]*k*
Ti'[t] == (1 - \[Eta]d)*wd - wie + f\[Alpha]*(1 - \[Eta]f)*wf -
whi, nd'[t] = -nd[t]*nT[t]*\[Sigma],
nT'[t] = -nd[t]*nT[t]*\[Sigma],
n\[Alpha]'[t] = nd[t]*nT[t]*\[Sigma], Te[0] = 1, Ti[0] = 1,
nd[0] == nT[0] == \!\(TraditionalForm\`
\*FractionBox[\(1.4447999999999998`*^26\), \(2\)]\), n\[Alpha][0] = 0}
sol = NDSolve[bal, {Te, Ti, nd, nT, n\[Alpha]}, t]
Here is the error:
NDSolve::deqn: Equation or list of equations expected instead of -
((2.85474*10^-12 E^(-19.983 ((1<<1>>Plus[<<3>>]<<1>>Power<<1>>
<<1>>])/Ti[t])^(1/3)) nd[t] nT[t])/(Ti[t]^(2/3) (1-(15.136 Ti[t]+4.6064
Ti[<<1>>]^2-0.10675 Ti[<<1>>]^3)/(1000+75.189 Ti[<<1>>]+13.5
Power[<<2>>]+0.01366 Power[<<2>>]))^(5/6))) in the first argument
{4.8*10^-9 nd[t] (Te^\[Prime])[t]==900000000000000000-8.70051*10^-25
(nd[t]+nT[t])^2 Sqrt[Te[t]]-(5.2266*10^46 <<1>>^<<1>> (11.92
+1.69505*10^-9 <<1>>^3))/((nd[t]+nT[t]) (3.77 +<<21>>
<<1>>+1.32084*10^-19 Power[<<2>>]))+(8.7331*10^17 (24-Log[Times[<<4>>]])
nd[t]^2 (-Te[t]+Ti[t]))/(1.09626*10^24 Te[<<1>>]+5.97059*10^20
<<1>>)^(3/2)+(5.152*10^-16 E^(-19.983 Times[<<2>>]^(1/3)) nd[t] nT[t]
(8/3 (4.32916*10^-7+Times[<<3>>])+64/9 Plus[<<2>>]^2))/((1+104/27
Plus[<<2>>]+64/9 Power[<<2>>]) (32+Te[t]) Ti[t]^(2/3) (1-Plus[<<3>>]
Power[<<2>>])^(5/6)),<<7>>,0}.

I changed some = to ==, assigned values to all the unknown variables, and got rid of the TraditionalForm wrapper to turn that initial condition into an ordinary fraction:
k=1;\[Eta]d=1;wd=1;wie=1;wb=1;f\[Alpha]=1;\[Eta]f=1;wf=1;whe=1;whi=1;\[Sigma]=1;
ne[t_]:=2t+1;ni[t_]:=3t+2;
bal = {(3/2)*ne[t]*k*Te'[t] == \[Eta]d*wd + wie - wb + f\[Alpha]*\[Eta]f*wf - whe,
(3/2)*ni[t]*k*Ti'[t] == (1 - \[Eta]d)*wd - wie + f\[Alpha]*(1 - \[Eta]f)*wf - whi,
nd'[t] == -nd[t]*nT[t]*\[Sigma],
nT'[t] == -nd[t]*nT[t]*\[Sigma],
n\[Alpha]'[t] == nd[t]*nT[t]*\[Sigma],
Te[0] == 1,
Ti[0] == 1,
nd[0] == nT[0] == 1.4447999999999998`*^26/2,
n\[Alpha][0] == 0};
sol = NDSolve[bal, {Te,Ti,nd,nT,n\[Alpha]}, {t,0,1}];
Plot[{Te[t],Ti[t]}/.sol[[1]],{t,0,1}]
Now substitute your actual values for all those variables and your actual functions for ne[t] and ni[t] and see what you get.

Related

Mathematica Series and Solve function

This is my first Mathematica code.
I defined the functions:
\[Beta] := v/c
\[Gamma] := 1/Sqrt[1 - \[Beta]^2]
TotalE[\[Gamma][\[Beta]]] := \[Gamma]mc^2
KE := TotalE[\[Gamma][\[Beta]]] - mc^2
Now I want to make a series expansion of KE at β → 0 up to order 2.
I tried:
Series[KE, {\[Beta], 1, 2}]
But I got the error message:
General::ivar: v/c is not a valid variable.
I also wanted to define Ekin as a function of β,
so I used the Solve function to get the inverse function, β[Ekin]:
Solve[KE, \[Beta]]
The same error arises again:
Solve::ivar: v/c is not a valid variable.
Try this
Clear[\[Gamma],\[Beta],mc,KE,s,v,c]
\[Gamma] = 1/Sqrt[1 - \[Beta]^2];
TotalE[\[Gamma]*\[Beta]] = \[Gamma]*mc^2;
KE = TotalE[\[Gamma]*\[Beta]] - mc^2;
s=Normal[Series[KE, {\[Beta], 1, 2}]]/.\[Beta]->v/c
Reduce[KE==0, \[Beta]]/.\[Beta]->v/c
which returns
-mc^2 + mc^2/(Sqrt[2]*Sqrt[1 - v/c]) -
(mc^2*(-1 + v/c))/(4*Sqrt[2]*Sqrt[1 - v/c]) +
(3*mc^2*(-1 + v/c)^2)/(32*Sqrt[2]*Sqrt[1 - v/c])
and
(mc != 0 && v/c == 0)||(-1+v^2/c^2 !=0 && mc == 0)
What that is trying to do is carry out your calculations with the simple variable β, and only replace β with v/c after the calculations are done.
But there are still things about the way you have written this that worry me. You are writing TotalE as if it were a function, but that is not the way to define a Mathematica function, and I am concerned this may get you into trouble.
Please let me know if I have misunderstood what you are trying to do, explain what I've gotten wrong, and I will try to find a way to fix it.

How to do parallel processing in PyTorch

I am working on a deep learning problem and solving it using PyTorch. I have two GPUs on the same machine (16273MiB, 12193MiB). I want to use both GPUs for my training (video dataset).
I get a warning:
There is an imbalance between your GPUs. You may want to exclude GPU 1 which
has less than 75% of the memory or cores of GPU 0. You can do so by setting
the device_ids argument to DataParallel, or by setting the CUDA_VISIBLE_DEVICES
environment variable.
warnings.warn(imbalance_warn.format(device_ids[min_pos], device_ids[max_pos]))
I also get an error:
raise TypeError('Broadcast function not implemented for CPU tensors')
TypeError: Broadcast function not implemented for CPU tensors
if __name__ == '__main__':
    opt.scales = [opt.initial_scale]
    for i in range(1, opt.n_scales):
        opt.scales.append(opt.scales[-1] * opt.scale_step)
    opt.arch = '{}-{}'.format(opt.model, opt.model_depth)
    opt.mean = get_mean(opt.norm_value)
    opt.std = get_std(opt.norm_value)
    print("opt", opt)
    with open(os.path.join(opt.result_path, 'opts.json'), 'w') as opt_file:
        json.dump(vars(opt), opt_file)
    torch.manual_seed(opt.manual_seed)
    model, parameters = generate_model(opt)
    # print(model)
    pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print("Total number of trainable parameters: ", pytorch_total_params)
    # Define class weights
    if opt.weighted:
        print("Weighted Loss is created")
        if opt.n_finetune_classes == 2:
            weight = torch.tensor([1.0, 3.0])
        else:
            weight = torch.ones(opt.n_finetune_classes)
    else:
        weight = None
    criterion = nn.CrossEntropyLoss()
    if not opt.no_cuda:
        criterion = nn.DataParallel(criterion.cuda())
    if opt.no_mean_norm and not opt.std_norm:
        norm_method = Normalize([0, 0, 0], [1, 1, 1])
    elif not opt.std_norm:
        norm_method = Normalize(opt.mean, [1, 1, 1])
    else:
        norm_method = Normalize(opt.mean, opt.std)
    train_loader = torch.utils.data.DataLoader(
        training_data,
        batch_size=opt.batch_size,
        shuffle=True,
        num_workers=opt.n_threads,
        pin_memory=True)
    train_logger = Logger(
        os.path.join(opt.result_path, 'train.log'),
        ['epoch', 'loss', 'acc', 'precision', 'recall', 'lr'])
    train_batch_logger = Logger(
        os.path.join(opt.result_path, 'train_batch.log'),
        ['epoch', 'batch', 'iter', 'loss', 'acc', 'precision', 'recall', 'lr'])
    if opt.nesterov:
        dampening = 0
    else:
        dampening = opt.dampening
    optimizer = optim.SGD(
        parameters,
        lr=opt.learning_rate,
        momentum=opt.momentum,
        dampening=dampening,
        weight_decay=opt.weight_decay,
        nesterov=opt.nesterov)
    # scheduler = lr_scheduler.ReduceLROnPlateau(
    #     optimizer, 'min', patience=opt.lr_patience)
    if not opt.no_val:
        spatial_transform = Compose([
            Scale(opt.sample_size),
            CenterCrop(opt.sample_size),
            ToTensor(opt.norm_value), norm_method
        ])
    print('run')
    for i in range(opt.begin_epoch, opt.n_epochs + 1):
        if not opt.no_train:
            adjust_learning_rate(optimizer, i, opt.lr_steps)
            train_epoch(i, train_loader, model, criterion, optimizer, opt,
                        train_logger, train_batch_logger)
I have also made changes in my train file:
model = nn.DataParallel(model(),device_ids=[0,1]).cuda()
outputs = model(inputs)
It does not seem to work properly and is giving an error. Please advise; I am new to PyTorch.
Thanks
As mentioned in this link, you have to do model.cuda() before passing it to nn.DataParallel.
net = nn.DataParallel(model.cuda(), device_ids=[0,1])
https://github.com/pytorch/pytorch/issues/17065
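To make the pattern concrete, here is a minimal sketch (the MyModel class, the toy tensor shapes, and the two-GPU setup are assumptions for illustration, not the poster's actual code):

import torch
import torch.nn as nn

# Hypothetical stand-in for whatever generate_model(opt) returns.
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = MyModel()
# Move the parameters to the GPU *before* wrapping; wrapping a CPU model is
# what raises "Broadcast function not implemented for CPU tensors".
model = nn.DataParallel(model.cuda(), device_ids=[0, 1])

inputs = torch.randn(8, 10).cuda()  # the input batch must be on the GPU too
outputs = model(inputs)             # DataParallel splits the batch across GPUs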

Confused about the use of validation set here

For the main.py of the px2graph project, the training and validation part is shown below:
splits = [s for s in ['train', 'valid'] if opt.iters[s] > 0]
start_round = opt.last_round - opt.num_rounds

# Main training loop
for round_idx in range(start_round, opt.last_round):
    for split in splits:
        print("Round %d: %s" % (round_idx, split))
        loader.start_epoch(sess, split, train_flag, opt.iters[split] * opt.batchsize)
        flag_val = split == 'train'
        for step in tqdm(range(opt.iters[split]), ascii=True):
            global_step = step + round_idx * opt.iters[split]
            to_run = [sample_idx, summaries[split], loss, accuracy]
            if split == 'train': to_run += [optim]
            # Do image summaries at the end of each round
            do_image_summary = step == opt.iters[split] - 1
            if do_image_summary: to_run[1] = image_summaries[split]
            # Start with lower learning rate to prevent early divergence
            t = 1/(1+np.exp(-(global_step-5000)/1000))
            lr_start = opt.learning_rate / 15
            lr_end = opt.learning_rate
            tmp_lr = (1-t) * lr_start + t * lr_end
            # Run computation graph
            result = sess.run(to_run, feed_dict={train_flag:flag_val, lr:tmp_lr})
            out_loss = result[2]
            out_accuracy = result[3]
            if sum(out_loss) > 1e5:
                print("Loss diverging...exiting before code freezes due to NaN values.")
                print("If this continues you may need to try a lower learning rate, a")
                print("different optimizer, or a larger batch size.")
                return
            time_str = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            print("{}: step {}, loss {:g}, acc {:g}".format(time_str, global_step, out_loss, out_accuracy))
            # Log data
            if split == 'valid' or (split == 'train' and step % 20 == 0) or do_image_summary:
                writer.add_summary(result[1], global_step)
                writer.flush()
        # Save training snapshot
        saver.save(sess, 'exp/' + opt.exp_id + '/snapshot')
        with open('exp/' + opt.exp_id + '/last_round', 'w') as f:
            f.write('%d\n' % round_idx)
It seems that the author only gets the result for each batch of the validation set. I am wondering: if I want to observe whether the model is improving or reaching its best performance, should I use the result on the whole validation set?
If the validation set is small enough, you can compute the loss and accuracy over the whole validation set during training to observe the performance. If the validation set is too large, it is better to compute batch-wise validation results and average them over multiple steps.
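As a rough sketch of the first option, reusing sess, loss, accuracy, train_flag, lr, tmp_lr, and np from the snippet above (valid_iters is a hypothetical name for the number of validation batches, not a name from the project):

# Sketch only: average the metrics over every validation batch instead of
# reading single-batch numbers.
total_loss, total_acc = 0.0, 0.0
for _ in range(valid_iters):
    out_loss, out_acc = sess.run([loss, accuracy],
                                 feed_dict={train_flag: False, lr: tmp_lr})
    total_loss += np.mean(out_loss)
    total_acc += np.mean(out_acc)
print("validation: mean loss {:g}, mean acc {:g}".format(
    total_loss / valid_iters, total_acc / valid_iters))

Tracking this mean across rounds is what tells you whether the model is still improving or has plateaued.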

AutoIt - Find duplicate images by content?

I am looking for a way to find duplicate images using AutoIt. I've looked into PixelSearch and SearchImage, but neither does exactly what I need.
I am trying to compare 2 images by filename and see if they are the same image (a duplicate). The best way I've thought to do it would be to:
1) Get both image sizes in pixels
2) Use a while loop to get the color of each pixel and store it in an array
3) Check to see if both arrays are equal to each other.
Does anybody have any ideas on how to achieve this?
I did some more research on this subject and built a small UDF based on a few answers I read (mainly based on monoceres's answer on AutoItScript.com). I figured I would post my solution here to help any future developers!
CompareImagesUDF.au3
Func _CompareImages($ciImageOne, $ciImageTwo)
    _GDIPlus_Startup()
    $fname1 = $ciImageOne
    If $fname1 = "" Then Exit
    $fname2 = $ciImageTwo
    If $fname2 = "" Then Exit
    $bm1 = _GDIPlus_ImageLoadFromFile($fname1)
    $bm2 = _GDIPlus_ImageLoadFromFile($fname2)
    ; MsgBox(0, "bm1==bm2", CompareBitmaps($bm1, $bm2))
    ; Store the result so the bitmaps can be disposed before returning
    $result = CompareBitmaps($bm1, $bm2)
    _GDIPlus_ImageDispose($bm1)
    _GDIPlus_ImageDispose($bm2)
    _GDIPlus_Shutdown()
    Return $result
EndFunc   ;==>_CompareImages
Func CompareBitmaps($bm1, $bm2)
    ; Lock both bitmaps' pixel data as 32-bit RGB and compare the raw bytes
    ; with memcmp, so the images match only if their pixel data matches.
    $Bm1W = _GDIPlus_ImageGetWidth($bm1)
    $Bm1H = _GDIPlus_ImageGetHeight($bm1)
    $BitmapData1 = _GDIPlus_BitmapLockBits($bm1, 0, 0, $Bm1W, $Bm1H, $GDIP_ILMREAD, $GDIP_PXF32RGB)
    $Stride = DllStructGetData($BitmapData1, "Stride")
    $Scan0 = DllStructGetData($BitmapData1, "Scan0")
    $ptr1 = $Scan0
    $size1 = ($Bm1H - 1) * $Stride + ($Bm1W - 1) * 4
    $Bm2W = _GDIPlus_ImageGetWidth($bm2)
    $Bm2H = _GDIPlus_ImageGetHeight($bm2)
    $BitmapData2 = _GDIPlus_BitmapLockBits($bm2, 0, 0, $Bm2W, $Bm2H, $GDIP_ILMREAD, $GDIP_PXF32RGB)
    $Stride = DllStructGetData($BitmapData2, "Stride")
    $Scan0 = DllStructGetData($BitmapData2, "Scan0")
    $ptr2 = $Scan0
    $size2 = ($Bm2H - 1) * $Stride + ($Bm2W - 1) * 4
    $smallest = $size1
    If $size2 < $smallest Then $smallest = $size2
    $call = DllCall("msvcrt.dll", "int:cdecl", "memcmp", "ptr", $ptr1, "ptr", $ptr2, "int", $smallest)
    _GDIPlus_BitmapUnlockBits($bm1, $BitmapData1)
    _GDIPlus_BitmapUnlockBits($bm2, $BitmapData2)
    Return ($call[0] = 0)
EndFunc   ;==>CompareBitmaps
Now to compare images, all you have to do is include the CompareImagesUDF.au3 file and call the function.
CompareImagesExample.au3
#Include "CompareImagesUDF.au3"
; Define the two images (They can be different file formats)
$img1 = "Image1.jpg"
$img2 = "Image2.jpg"
; Compare the two images
$duplicateCheck = _CompareImages($img1, $img2)
MsgBox(0,"Is Duplicate?", $duplicateCheck)
If you want to find out whether both images are an exact match, regardless of whether the names are the same or different, use the built-in Crypt function _Crypt_HashFile with MD2 or MD5 to make a hash of both files and compare the hashes.
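The idea, sketched here in Python for concreteness (hashlib plays the role of _Crypt_HashFile; this is an illustration of the approach, not AutoIt code):

import hashlib

def file_md5(path, chunk_size=8192):
    # Hash the file in chunks so large images need not fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Byte-identical files produce identical digests, whatever their filenames.
print(file_md5("Image1.jpg") == file_md5("Image2.jpg"))

Note that a file hash is stricter than the pixel comparison above: two images with identical pixels but different metadata or compression will hash differently.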

How to speed up MATLAB integration?

I have the following code:
function [] = Solver( t )
    %pre-declaration
    foo=[1,1,1];
    fooCell = num2cell(foo);
    [q, val(q), star]=fooCell{:};
    %functions used in prosomoiwsh
    syms q val(q) star;
    qd1=symfun(90*pi/180+30*pi/180*cos(q),q);
    qd2=symfun(90*pi/180+30*pi/180*sin(q),q);
    p1=symfun(79*pi/180*exp(-1.25*q)+pi/180,q);
    p2=symfun(79*pi/180*exp(-1.25*q)+pi/180,q);
    e1=symfun(val-qd1,q);
    e2=symfun(val-qd2,q);
    T1=symfun(log(-(1+star)/star),star);
    T2=symfun(log(star/(1-star)),star);
    %anonymous function handles
    lambda=[0.75;10.494441313222076];
    calcEVR_handles={@(t,x)[double(subs(diff(subs(T1,star,e1/p1),q)+subs(lambda(1)*T1,star,e1/p1),{diff(val,q);val;q},{x(2);x(1);t})),double(subs(diff(subs(T1,star,e1/p1),q)+subs(lambda(1)*T1,star,e1/p1),{diff(val,q);val;q},{0;x(1);t})),double(subs(double(subs(subs(diff(T1,star),star,e1/p1),{val;q},{x(1);t}))/p1,q,t))];
        @(t,x)[double(subs(diff(subs(T2,star,e2/p2),q)+subs(lambda(2)*T2,star,e2/p2),{diff(val,q);val;q},{x(4);x(3);t})),double(subs(diff(subs(T2,star,e2/p2),q)+subs(lambda(2)*T2,star,e2/p2),{diff(val,q);val;q},{0;x(3);t})),double(subs(double(subs(subs(diff(T2,star),star,e2/p2),{val;q},{x(3);t}))/p2,q,t))]};
    options = odeset('AbsTol',1e-1,'RelTol',1e-1);
    [T,x_r] = ode23(@prosomoiwsh,[0 t],[80*pi/180;0;130*pi/180;0;2.4943180186983711;11.216948999754299],options);
    save newresult T x_r

    function dx_th = prosomoiwsh(t,x_th)
        %declarations
        k=0.80773938740480955;
        nf=6.2860930902603602;
        hGa=0.16727117784664769;
        hGb=0.010886618389781832;
        dD=0.14062935253218495;
        s=0.64963817519705203;
        IwF={[4.5453398382686956 5.2541234145178066 -6.5853972592002235 7.695225990702979];[-4.4358339284697337 -8.1138542053372298 -8.2698210582548395 3.9739729629084071]};
        IwG={[5.7098975358444752 4.2470526600975802 -0.83412489434697168 0.53829395964565041] [1.8689492167233894 -0.0015017513794517434 8.8666804106266461 -1.0775021663921467];[6.9513235639494155 -0.8133752392893685 7.4032432556804162 3.1496138243338709] [5.8037182454981568 2.0933267947187457 4.852362963697928 -0.10745559204132382]};
        IbF={-1.2165533594615545;7.9215291787744917};
        IbG={2.8425752327892844 2.5931576770598168;9.4789237295474873 7.9378928037841252};
        p=2;
        m=2;
        signG=1;
        n_vals=[2;2];
        nFixedStates=4;
        gamma_nn=[0.31559428834175318;9.2037894041383641];
        th_star_guess=[2.4943180186983711;11.216948999754299];
        %solution
        x = x_th(1:nFixedStates);
        th = x_th(nFixedStates+1:nFixedStates+p);
        f = zeros(m,1);
        G = zeros(m,m);
        ZF = zeros(p,m);
        ZG = zeros(p,m,m);
        for i=1:m
            [f(i), ZF(:,i)] = calculate_neural_output(x, IwF{i}, IbF{i}, th);
            for j=1:m
                [G(i,j), ZG(:,i,j)] = calculate_neural_output(x, IwG{i,j}, IbG{i,j}, th);
            end
        end
        detG = det(G);
        if m == 1
            adjG = 1;
        else
            adjG = detG*G^-1;
        end
        E = zeros(m,1);
        V = zeros(m,1);
        R = zeros(m,m);
        for i=1:m
            EVR=calcEVR_handles{i}(t,x);
            E(i)=EVR(1);
            V(i)=EVR(2);
            R(i,i)=EVR(3);
        end
        Rinv = R^-1;
        prod_R_E = R*E;
        ub = f + Rinv * (V + k*E) + nf*prod_R_E;
        ua = - detG / (detG^2+dD) * (adjG * ub) ;
        u = ua - signG * (hGa*(ua'*ua) + hGb*(ub'*ub)) * prod_R_E;
        dx_th = zeros(nFixedStates+p, 1); %preallocation
        %System in form (1) of the IEEE paper
        [vec_sys_f, vec_sys_G] = sys_f_G(x);
        dx_nm = vec_sys_f + vec_sys_G*u;
        %Calculation of dx
        index_start = 1;
        index_end = -1;
        for i=1:m
            index_end = index_end + n_vals(i);
            for j=index_start:index_end
                dx_th(j) = x(j+1);
            end
            dx_th(index_end+1) = dx_nm(i);
            index_start = index_end + 2;
        end
        %Calculation of dth
        AFvalueT = zeros(p,m);
        for i=1:m
            AFvalueT(:,i) = 0;
            for j=1:m
                AFvalueT(:,i) = AFvalueT(:,i)+ZG(:,i,j)*ua(j);
            end
        end
        dx_th(nFixedStates+1:nFixedStates+p) = diag(gamma_nn)*( (ZF+AFvalueT)*prod_R_E -s*(th-th_star_guess) );
        display(t)
    end

    function [y, Z] = calculate_neural_output(input, Iw, Ib, state)
        Z = [tanh(Iw*input+Ib);1];
        y = state' * Z;
    end

    function [ f,g ] = sys_f_G( x )
        Iz1=0.96;
        Iz2=0.81;
        m1=3.2;
        m2=2.0;
        l1=0.5;
        l2=0.4;
        g=9.81;
        q1=x(1);
        q2=x(3);
        q1dot=x(2);
        q2dot=x(4);
        M=[Iz1+Iz2+m1*l1^2/4+m2*(l1^2+l2^2/4+l1*l2*cos(q2)),Iz2+m2*(l2^2/4+l1*l2*cos(q2)/2);Iz2+m2*(l2^2/4+l1*l2*cos(q2)/2),Iz2+m2*l2^2/4];
        c=0.5*m2*l1*l2*sin(q2);
        C=[-c*q2dot,-c*(q1dot+q2dot);c*q1dot,0];
        G=[0.5*m1*g*l1*cos(q1)+m2*g*(l1*cos(q1)+0.5*l2*cos(q1+q2));0.5*m2*g*l2*cos(q1+q2)];
        f=-M\(C*[q1dot;q2dot]+G);
        g=inv(M);
    end
end
Its purpose is to simulate the control of a 2-DOF robotic arm using a certain control law. The results I get after running the simulation are correct (I have a graph of the output I should expect), but it takes ages to finish!
Is there anything I could do to speed up the process?
In order to improve the computational speed of any integration in Matlab, a few options are available to you:
Reduce the required accuracy (which you already have done)
Use an adapted integrator. As mentioned by @sanchises, ode23 can sometimes be slower than another ODE solver in Matlab (if your equation is stiff, for instance). You could try to determine which solver is most adapted from the documentation... or simply try them all!
The best solution, but by far the most time-consuming, would be to use a compiled language, such as C or Fortran. If the integration is but a part of your Matlab program, you could use MEX files and translate only the integration to a compiled language. You could also create dynamic libraries in your compiled language and load them in Matlab using loadlibrary. I use loadlibrary and an integration routine written in Fortran for the integration of orbits and trajectories, and I get over 100 times speedup with Fortran vs. Matlab! Of course, technically, the integration is not in Matlab anymore... But the library or MEX files trick allows you to only convert the integration part of your program to a different language. A number of open source integrators are available, such as ODEPACK or RKSUITE in Fortran. Then, you only need to create a wrapper and your dynamics function in the correct language.
So to put it in a nutshell, if you're going to use this integration a lot, I would advise using a compiled language. If not, you should make do with Matlab, and be patient!
