OpenMDAO hierarchical solvers recording

In OpenMDAO, is there any recommendation on how to record and read solver cases when the model is composed of multiple groups/cycles and multiple nonlinear solvers?
I have a model built of 2 cycles (cycle1 and cycle2), one of them containing two subcycles (cycle1_1 and cycle1_2). For now I am attaching a recorder to each of my nonlinear solvers:
solver1 = model.cycle1.nonlinear_solver
solver1_1 = model.cycle1.cycle1_1.nonlinear_solver
solver1_2 = model.cycle1.cycle1_2.nonlinear_solver
solver2 = model.cycle2.nonlinear_solver
solver1.add_recorder(recorder)
solver1_1.add_recorder(recorder)
solver1_2.add_recorder(recorder)
solver2.add_recorder(recorder)
When trying to read the results with:
cr = om.CaseReader(results)
I am getting the following error:
RuntimeError: Can't parse solver iteration coordinate:
rank0:root._solve_nonlinear|0|NLRunOnce|0|cycle1._solve_nonlinear|0|NonlinearBlockGS|1|cycle1.cycle1_1._solve_nonlinear|1|NonlinearBlockGS|1
I am looking to get information about the convergence history and some plots on the coupling variables.
EDIT: My code has a structure similar to that in https://openmdao.org/newdocs/versions/latest/basic_user_guide/multidisciplinary_optimization/sellar.html, with the groups defined in setup:
import openmdao.api as om

class MDA(om.Group):
    # class ObjCmp(om.ExplicitComponent):
    #     # some objective component
    # class ConCmp(om.ExplicitComponent):
    #     # some constraint component
    def setup(self):
        cycle1 = self.add_subsystem('cycle1', om.Group(), promotes=['*'])
        cycle1_1 = cycle1.add_subsystem('cycle1_1', om.Group(), promotes=['*'])
        cycle1_1_comp = cycle1_1.add_subsystem('comp', om.ExecComp('x1 = 3 + x2'), promotes=['*'])
        cycle1_2 = cycle1.add_subsystem('cycle1_2', om.Group(), promotes=['*'])
        cycle1_2_comp = cycle1_2.add_subsystem('comp', om.ExecComp('x2 = 3 + x1 + y'), promotes=['*'])
        cycle2 = self.add_subsystem('cycle2', om.Group(), promotes=['*'])
        cycle2.add_subsystem('comp', om.ExecComp('y = x1 + 2'), promotes=['*'])

p = om.Problem(model=MDA())
model = p.model
p.setup()
p.run_model()

Unfortunately, as of OpenMDAO V3.16 this looks like a bug. It's been logged as a high-priority issue on the OpenMDAO development backlog: Issue #2453
I can replicate it with the following script:
import openmdao.api as om
p = om.Problem()
model = p.model
cycle1 = p.model.add_subsystem('cycle1', om.Group(), promotes=['*'])
cycle1_1 = cycle1.add_subsystem('cycle1_1', om.Group(), promotes=['*'])
cycle1_1_comp = cycle1_1.add_subsystem('comp', om.ExecComp('x1 = 3 + x2'), promotes=["*"])
cycle1_2 = cycle1.add_subsystem('cycle1_2', om.Group(), promotes=['*'])
cycle1_2_comp = cycle1_2.add_subsystem('comp', om.ExecComp('x2 = 3 + x1 + y'), promotes=["*"])
cycle2 = p.model.add_subsystem('cycle2', om.Group(), promotes=['*'])
cycle2.add_subsystem('comp', om.ExecComp('y = x1 + 2'), promotes=['*'])
solver1 = model.cycle1.nonlinear_solver
solver1_1 = model.cycle1.cycle1_1.nonlinear_solver
solver1_2 = model.cycle1.cycle1_2.nonlinear_solver
solver2 = model.cycle2.nonlinear_solver
print(solver1, solver1_1, solver1_2, solver2)
recorder = om.SqliteRecorder('cases.db')
solver1.add_recorder(recorder)
# recorders on nested solvers trigger the bug
# solver1_1.add_recorder(recorder)
# solver1_2.add_recorder(recorder)
# Kind-of workaround, put the recorder on the child component/group instead
cycle1_1_comp.add_recorder(recorder)
cycle1_2_comp.add_recorder(recorder)
solver2.add_recorder(recorder)
p.setup()
p.run_model()
reader = om.CaseReader('cases.db')
print(reader.list_sources())
It seems to be the nested solver recorders that are triggering the bug. As a kind-of workaround you can stick the recorder on the lower-level group/component instead. That makes it a bit harder to know which cases belong to which solver iteration, but the naming scheme of the iteration coordinates should at least help a little there. Hopefully that gets you moving in the meantime, while the bug is fixed.
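Once the cases are recorded, reading them back and plotting the convergence of a coupling variable could look something like the sketch below. This assumes the workaround script above has been run so that 'cases.db' exists; the exact source names come from list_sources(), and 'root.cycle1.cycle1_1.comp' is just the component-recorder source from that script. Variables are accessed by their promoted names:

import matplotlib.pyplot as plt
import openmdao.api as om

cr = om.CaseReader('cases.db')
# e.g. ['root.cycle1.nonlinear_solver', 'root.cycle1.cycle1_1.comp', ...]
print(cr.list_sources())

# gather the history of a coupling variable from the component recorder
x1_history = []
for case_id in cr.list_cases('root.cycle1.cycle1_1.comp', recurse=False):
    case = cr.get_case(case_id)
    x1_history.append(case['x1'])  # promoted variable name

plt.plot(x1_history, marker='o')
plt.xlabel('recorded execution')
plt.ylabel('x1')
plt.show()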

Related

fable package: Could not find an appropriate ARIMA model

I am trying to fit an ARIMA model to hourly data. First I tried the fable package, and the ARIMA function could not find an appropriate model. Then I used the forecast package with the auto.arima function, which worked perfectly. I have one example series (available here: https://gist.github.com/mizhozan/800fec80682822969e7d35ebba395) and show the results below:
data.arima <- read.csv('test.csv', header = TRUE)[,-1]
## fable package
data.arima$Date <- lubridate::ymd_hms(data.arima$Date, truncated = 2)
library(tidyverse)
library(fable)
result.arima <- data.arima %>%
  as_tsibble(., index = Date) %>%
  model(ARIMA(value ~ PDQ() + pdq() +
                fourier(period = "day", K = 3) +
                fourier(period = "week", K = 2), seasonal.test = "ocsb")) %>%
  forecast(h = 24)
Warning message:
1 error encountered for ARIMA(value ~ PDQ() + pdq() + fourier(period = "day", K = 3) +
fourier(period = "week", K = 2), seasonal.test = "ocsb")
[1] Could not find an appropriate ARIMA model.
This is likely because automatic selection does not select models with characteristic roots that may be numerically unstable.
For more details, refer to https://otexts.com/fpp3/arima-r.html#plotting-the-characteristic-roots
## forecast package
library(forecast)
series.arima <- msts(data.arima$value, seasonal.periods = c(24, 24*7))
model.arima <- auto.arima(series.arima, seasonal.test = "ocsb", xreg=fourier(series.arima,K=c(3,2)))
Series: series.arima
Regression with ARIMA(4,0,1) errors
Coefficients:
ar1 ar2 ar3 ar4 ma1 intercept S1-24 C1-24 S2-24 C2-24 S3-24 C3-24 S1-168 C1-168 S2-168 C2-168
1.9064 -1.4934 0.8292 -0.3056 -0.8728 664263.21 -310891.13 -349744.23 -133862.32 -20587.2 69313.88 51963.803 43880.66 1524.578 -3823.166 5642.26
s.e. 0.0755 0.1192 0.1085 0.0521 0.0605 7781.72 20778.06 20591.69 11662.66 11606.0 8792.99 8768.856 11342.32 11669.244 12819.074 13091.08
sigma^2 estimated as 5.122e+09: log likelihood=-4225.19
AIC=8484.38 AICc=8486.31 BIC=8549.28
result.arima.2 <- forecast(model.arima, xreg=fourier(series.arima, K = c(3,2), h = 24))
I would appreciate it if someone could explain the problem here.

XAI for a custom transformer model using AllenNLP

I have been solving an NER problem for a Vietnamese dataset with 15 tags in IO format. I have been using the AllenNLP Interpret toolkit with my model, but I cannot configure it completely.
I use the pre-trained language model "xlm-roberta-base" from HuggingFace. I concatenate the last 4 BERT layers and pass the result through a linear layer; the model architecture is shown in the source below.
class BaseBertSoftmax(nn.Module):
    def __init__(self, model, drop_out, num_labels):
        super(BaseBertSoftmax, self).__init__()
        self.num_labels = num_labels
        self.model = model
        self.dropout = nn.Dropout(drop_out)
        self.classifier = nn.Linear(4*768, num_labels)  # 4 last layers

    def forward_custom(self, input_ids, attention_mask=None,
                       labels=None, head_mask=None):
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = torch.cat((outputs[1][-1], outputs[1][-2],
                                     outputs[1][-3], outputs[1][-4]), -1)
        sequence_output = self.dropout(sequence_output)
        logits = self.classifier(sequence_output)  # bsz, seq_len, num_labels
        outputs = (logits,) + outputs[2:]  # add hidden states and attention if they are here
        if labels is not None:
            loss_fct = nn.CrossEntropyLoss(ignore_index=0)
            if attention_mask is not None:
                active_loss = attention_mask.view(-1) == 1
                active_logits = logits.view(-1, self.num_labels)[active_loss]
                active_labels = labels.view(-1)[active_loss]
                loss = loss_fct(active_logits, active_labels)
            else:
                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
            outputs = (loss,) + outputs
        return outputs  # scores, (hidden_states), (attentions)
What steps do I have to take to integrate this model into AllenNLP Interpret?
Could you please help me with this problem?

How can I get the score from the question-answering pipeline? Is there a bug when the question-answering pipeline is used?

When I run the following code
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
text = r"""
As checked Dis is not yet on boarded to ARB portal, hence we cannot upload the invoices in portal
"""
questions = [
"Dis asked if it is possible to post the two invoice in ARB.I have not access so I wanted to check if you would be able to do it.",
]
for question in questions:
    inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt")
    input_ids = inputs["input_ids"].tolist()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    answer_start_scores, answer_end_scores = model(**inputs)
    # Get the most likely beginning of the answer with the argmax of the score
    answer_start = torch.argmax(answer_start_scores)
    # Get the most likely end of the answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1
    answer = tokenizer.convert_tokens_to_string(
        tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
    print(f"Question: {question}")
    print(f"Answer: {answer}\n")
The answer that I get here is:
Question: Dis asked if it is possible to post the two invoice in ARB.I have not access so I wanted to check if you would be able to do it.
Answer: dis is not yet on boarded to ARB portal
How do I get a score for this answer? The score should be very similar to what I get when I run the question-answering pipeline.
I have to take this approach since the question-answering pipeline gives me a KeyError for the code below:
from transformers import pipeline
nlp = pipeline("question-answering")
context = r"""
As checked Dis is not yet on boarded to ARB portal, hence we cannot upload the invoices in portal.
"""
print(nlp(question="Dis asked if it is possible to post the two invoice in ARB?", context=context))
This is my attempt to get the score. I cannot figure out what feature.p_mask is, so at the moment I cannot remove the non-context indexes that contribute to the softmax.
# ... assuming imports and question and context
model_name="deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
inputs = tokenizer(question, context,
                   add_special_tokens=True,
                   return_tensors='pt')
input_ids = inputs['input_ids'].tolist()[0]
outputs = model(**inputs)
# used to compute score
start = outputs.start_logits.detach().numpy()
end = outputs.end_logits.detach().numpy()
# from source code
# Ensure padded tokens & question tokens cannot belong to the set of candidate answers.
#?? undesired_tokens = np.abs(np.array(feature.p_mask) - 1) & feature.attention_mask
# Generate mask
undesired_tokens = inputs['attention_mask']
undesired_tokens_mask = undesired_tokens == 0.0
# Make sure non-context indexes in the tensor cannot contribute to the softmax
start_ = np.where(undesired_tokens_mask, -10000.0, start)
end_ = np.where(undesired_tokens_mask, -10000.0, end)
# Normalize logits and spans to retrieve the answer
start_ = np.exp(start_ - np.log(np.sum(np.exp(start_), axis=-1, keepdims=True)))
end_ = np.exp(end_ - np.log(np.sum(np.exp(end_), axis=-1, keepdims=True)))
# Compute the score of each tuple(start, end) to be the real answer
outer = np.matmul(np.expand_dims(start_, -1), np.expand_dims(end_, 1))
# Remove candidate with end < start and end - start > max_answer_len
max_answer_len = 15
candidates = np.tril(np.triu(outer), max_answer_len - 1)
scores_flat = candidates.flatten()
idx_sort = [np.argmax(scores_flat)]
start, end = np.unravel_index(idx_sort, candidates.shape)[1:]
end += 1
score = candidates[0, start, end-1]
start, end, score = start.item(), end.item(), score.item()
print(tokenizer.decode(input_ids[start:end]))
print(score)
See more source code
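For what it's worth, when the question-answering pipeline does run, it returns the score directly alongside the answer span. A minimal sketch, assuming a recent transformers release (the KeyError reported above was version-specific):

from transformers import pipeline

nlp = pipeline("question-answering", model="deepset/roberta-base-squad2")
result = nlp(question="Dis asked if it is possible to post the two invoice in ARB?",
             context="As checked Dis is not yet on boarded to ARB portal, hence we cannot upload the invoices in portal.")
# result is a dict like {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
print(result['score'], result['answer'])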

How to do parallel processing in pytorch

I am working on a deep learning problem and solving it using PyTorch. I have two GPUs on the same machine (16273MiB and 12193MiB). I want to use both GPUs for my training (on a video dataset).
I get a warning:
There is an imbalance between your GPUs. You may want to exclude GPU 1 which
has less than 75% of the memory or cores of GPU 0. You can do so by setting
the device_ids argument to DataParallel, or by setting the CUDA_VISIBLE_DEVICES
environment variable.
warnings.warn(imbalance_warn.format(device_ids[min_pos], device_ids[max_pos]))
I also get an error:
raise TypeError('Broadcast function not implemented for CPU tensors')
TypeError: Broadcast function not implemented for CPU tensors
if __name__ == '__main__':
    opt.scales = [opt.initial_scale]
    for i in range(1, opt.n_scales):
        opt.scales.append(opt.scales[-1] * opt.scale_step)
    opt.arch = '{}-{}'.format(opt.model, opt.model_depth)
    opt.mean = get_mean(opt.norm_value)
    opt.std = get_std(opt.norm_value)
    print("opt", opt)
    with open(os.path.join(opt.result_path, 'opts.json'), 'w') as opt_file:
        json.dump(vars(opt), opt_file)
    torch.manual_seed(opt.manual_seed)
    model, parameters = generate_model(opt)
    # print(model)
    pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print("Total number of trainable parameters: ", pytorch_total_params)
    # Define class weights
    if opt.weighted:
        print("Weighted Loss is created")
        if opt.n_finetune_classes == 2:
            weight = torch.tensor([1.0, 3.0])
        else:
            weight = torch.ones(opt.n_finetune_classes)
    else:
        weight = None
    criterion = nn.CrossEntropyLoss()
    if not opt.no_cuda:
        criterion = nn.DataParallel(criterion.cuda())
    if opt.no_mean_norm and not opt.std_norm:
        norm_method = Normalize([0, 0, 0], [1, 1, 1])
    elif not opt.std_norm:
        norm_method = Normalize(opt.mean, [1, 1, 1])
    else:
        norm_method = Normalize(opt.mean, opt.std)
    train_loader = torch.utils.data.DataLoader(
        training_data,
        batch_size=opt.batch_size,
        shuffle=True,
        num_workers=opt.n_threads,
        pin_memory=True)
    train_logger = Logger(
        os.path.join(opt.result_path, 'train.log'),
        ['epoch', 'loss', 'acc', 'precision', 'recall', 'lr'])
    train_batch_logger = Logger(
        os.path.join(opt.result_path, 'train_batch.log'),
        ['epoch', 'batch', 'iter', 'loss', 'acc', 'precision', 'recall', 'lr'])
    if opt.nesterov:
        dampening = 0
    else:
        dampening = opt.dampening
    optimizer = optim.SGD(
        parameters,
        lr=opt.learning_rate,
        momentum=opt.momentum,
        dampening=dampening,
        weight_decay=opt.weight_decay,
        nesterov=opt.nesterov)
    # scheduler = lr_scheduler.ReduceLROnPlateau(
    #     optimizer, 'min', patience=opt.lr_patience)
    if not opt.no_val:
        spatial_transform = Compose([
            Scale(opt.sample_size),
            CenterCrop(opt.sample_size),
            ToTensor(opt.norm_value), norm_method
        ])
    print('run')
    for i in range(opt.begin_epoch, opt.n_epochs + 1):
        if not opt.no_train:
            adjust_learning_rate(optimizer, i, opt.lr_steps)
            train_epoch(i, train_loader, model, criterion, optimizer, opt,
                        train_logger, train_batch_logger)
I have also made changes in my train file:
model = nn.DataParallel(model(),device_ids=[0,1]).cuda()
outputs = model(inputs)
It does not seem to work properly and gives an error. Please advise; I am new to PyTorch.
Thanks
As mentioned in this link, you have to do model.cuda() before passing it to nn.DataParallel.
net = nn.DataParallel(model.cuda(), device_ids=[0,1])
https://github.com/pytorch/pytorch/issues/17065
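Putting the pieces together, here is a minimal sketch of the ordering that avoids the CPU-tensor broadcast error. The linear model is a stand-in for wherever your generate_model(opt) output would go, and the loss stays a plain module, since wrapping the criterion in DataParallel is not needed:

import torch
import torch.nn as nn

model = nn.Linear(128, 2)  # stand-in for the real video model
model = nn.DataParallel(model.cuda(), device_ids=[0, 1])  # .cuda() first, then wrap
criterion = nn.CrossEntropyLoss().cuda()  # no DataParallel around the loss

inputs = torch.randn(8, 128).cuda()  # batches must live on the GPU as well
targets = torch.randint(0, 2, (8,)).cuda()
loss = criterion(model(inputs), targets)
loss.backward()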

How to speed up MATLAB integration?

I have the following code:
function [] = Solver( t )
    %pre-declaration
    foo=[1,1,1];
    fooCell = num2cell(foo);
    [q, val(q), star]=fooCell{:};
    %functions used in prosomoiwsh
    syms q val(q) star;
    qd1=symfun(90*pi/180+30*pi/180*cos(q),q);
    qd2=symfun(90*pi/180+30*pi/180*sin(q),q);
    p1=symfun(79*pi/180*exp(-1.25*q)+pi/180,q);
    p2=symfun(79*pi/180*exp(-1.25*q)+pi/180,q);
    e1=symfun(val-qd1,q);
    e2=symfun(val-qd2,q);
    T1=symfun(log(-(1+star)/star),star);
    T2=symfun(log(star/(1-star)),star);
    %anonymous function handles
    lambda=[0.75;10.494441313222076];
    calcEVR_handles={@(t,x)[double(subs(diff(subs(T1,star,e1/p1),q)+subs(lambda(1)*T1,star,e1/p1),{diff(val,q);val;q},{x(2);x(1);t})),double(subs(diff(subs(T1,star,e1/p1),q)+subs(lambda(1)*T1,star,e1/p1),{diff(val,q);val;q},{0;x(1);t})),double(subs(double(subs(subs(diff(T1,star),star,e1/p1),{val;q},{x(1);t}))/p1,q,t))];@(t,x)[double(subs(diff(subs(T2,star,e2/p2),q)+subs(lambda(2)*T2,star,e2/p2),{diff(val,q);val;q},{x(4);x(3);t})),double(subs(diff(subs(T2,star,e2/p2),q)+subs(lambda(2)*T2,star,e2/p2),{diff(val,q);val;q},{0;x(3);t})),double(subs(double(subs(subs(diff(T2,star),star,e2/p2),{val;q},{x(3);t}))/p2,q,t))]};
    options = odeset('AbsTol',1e-1,'RelTol',1e-1);
    [T,x_r] = ode23(@prosomoiwsh,[0 t],[80*pi/180;0;130*pi/180;0;2.4943180186983711;11.216948999754299],options);
    save newresult T x_r

    function dx_th = prosomoiwsh(t,x_th)
        %declarations
        k=0.80773938740480955;
        nf=6.2860930902603602;
        hGa=0.16727117784664769;
        hGb=0.010886618389781832;
        dD=0.14062935253218495;
        s=0.64963817519705203;
        IwF={[4.5453398382686956 5.2541234145178066 -6.5853972592002235 7.695225990702979];[-4.4358339284697337 -8.1138542053372298 -8.2698210582548395 3.9739729629084071]};
        IwG={[5.7098975358444752 4.2470526600975802 -0.83412489434697168 0.53829395964565041] [1.8689492167233894 -0.0015017513794517434 8.8666804106266461 -1.0775021663921467];[6.9513235639494155 -0.8133752392893685 7.4032432556804162 3.1496138243338709] [5.8037182454981568 2.0933267947187457 4.852362963697928 -0.10745559204132382]};
        IbF={-1.2165533594615545;7.9215291787744917};
        IbG={2.8425752327892844 2.5931576770598168;9.4789237295474873 7.9378928037841252};
        p=2;
        m=2;
        signG=1;
        n_vals=[2;2];
        nFixedStates=4;
        gamma_nn=[0.31559428834175318;9.2037894041383641];
        th_star_guess=[2.4943180186983711;11.216948999754299];
        %solution
        x = x_th(1:nFixedStates);
        th = x_th(nFixedStates+1:nFixedStates+p);
        f = zeros(m,1);
        G = zeros(m,m);
        ZF = zeros(p,m);
        ZG = zeros(p,m,m);
        for i=1:m
            [f(i), ZF(:,i)] = calculate_neural_output(x, IwF{i}, IbF{i}, th);
            for j=1:m
                [G(i,j), ZG(:,i,j)] = calculate_neural_output(x, IwG{i,j}, IbG{i,j}, th);
            end
        end
        detG = det(G);
        if m == 1
            adjG = 1;
        else
            adjG = detG*G^-1;
        end
        E = zeros(m,1);
        V = zeros(m,1);
        R = zeros(m,m);
        for i=1:m
            EVR=calcEVR_handles{i}(t,x);
            E(i)=EVR(1);
            V(i)=EVR(2);
            R(i,i)=EVR(3);
        end
        Rinv = R^-1;
        prod_R_E = R*E;
        ub = f + Rinv * (V + k*E) + nf*prod_R_E;
        ua = - detG / (detG^2+dD) * (adjG * ub);
        u = ua - signG * (hGa*(ua'*ua) + hGb*(ub'*ub)) * prod_R_E;
        dx_th = zeros(nFixedStates+p, 1); %preallocation
        %System in form (1) of the IEEE paper
        [vec_sys_f, vec_sys_G] = sys_f_G(x);
        dx_nm = vec_sys_f + vec_sys_G*u;
        %Calculation of dx
        index_start = 1;
        index_end = -1;
        for i=1:m
            index_end = index_end + n_vals(i);
            for j=index_start:index_end
                dx_th(j) = x(j+1);
            end
            dx_th(index_end+1) = dx_nm(i);
            index_start = index_end + 2;
        end
        %Calculation of dth
        AFvalueT = zeros(p,m);
        for i=1:m
            AFvalueT(:,i) = 0;
            for j=1:m
                AFvalueT(:,i) = AFvalueT(:,i)+ZG(:,i,j)*ua(j);
            end
        end
        dx_th(nFixedStates+1:nFixedStates+p) = diag(gamma_nn)*( (ZF+AFvalueT)*prod_R_E -s*(th-th_star_guess) );
        display(t)
    end

    function [y, Z] = calculate_neural_output(input, Iw, Ib, state)
        Z = [tanh(Iw*input+Ib);1];
        y = state' * Z;
    end

    function [ f,g ] = sys_f_G( x )
        Iz1=0.96;
        Iz2=0.81;
        m1=3.2;
        m2=2.0;
        l1=0.5;
        l2=0.4;
        g=9.81;
        q1=x(1);
        q2=x(3);
        q1dot=x(2);
        q2dot=x(4);
        M=[Iz1+Iz2+m1*l1^2/4+m2*(l1^2+l2^2/4+l1*l2*cos(q2)),Iz2+m2*(l2^2/4+l1*l2*cos(q2)/2);Iz2+m2*(l2^2/4+l1*l2*cos(q2)/2),Iz2+m2*l2^2/4];
        c=0.5*m2*l1*l2*sin(q2);
        C=[-c*q2dot,-c*(q1dot+q2dot);c*q1dot,0];
        G=[0.5*m1*g*l1*cos(q1)+m2*g*(l1*cos(q1)+0.5*l2*cos(q1+q2));0.5*m2*g*l2*cos(q1+q2)];
        f=-M\(C*[q1dot;q2dot]+G);
        g=inv(M);
    end
end
Its purpose is to simulate the control of a 2-DOF robotic arm using a certain control law. The results I get after running the simulation are correct (I have a graph of the output I should expect), but it takes ages to finish!
Is there anything I could do to speed up the process?
In order to improve the computational speed of any integration in MATLAB, a few options are available to you:
- Reduce the required accuracy (which you already have done).
- Use an adapted integrator. As mentioned by @sanchises, sometimes ode23 can take longer than another ODE solver in MATLAB (if your equation is stiff, for instance). You could try to determine which solver is most suitable from the documentation... or simply try them all!
- The best solution, but by far the most time-consuming, would be to use a compiled language such as C or Fortran. If the integration is but a part of your MATLAB program, you could use MEX files and translate only the integration to a compiled language. You could also create dynamic libraries in your compiled language and load them in MATLAB using loadlibrary. I use loadlibrary and an integration routine written in Fortran for the integration of orbits and trajectories, and I get over 100 times speedup with Fortran vs. MATLAB! Of course, technically, the integration is not in MATLAB anymore... but the library or MEX file trick allows you to convert only the integration part of your program to a different language! A number of open-source integrators are available, such as ODEPACK or RKSUITE in Fortran. Then you only need to create a wrapper and your dynamics function in the correct language.
So, in a nutshell: if you're going to use this integration a lot, I would advise using a compiled language. If not, you should make do with MATLAB and be patient!
