ReduceLROnPlateau with pytorch_lightning and horovod - pytorch-lightning

When I use torch.optim.lr_scheduler.ReduceLROnPlateau with horovod to train my net, horovod checks whether my lr_scheduler is a pytorch_lightning.utilities.types._LRScheduler or not, as in the following (the HorovodStrategy.set function in pytorch_lightning.strategies.horovod):
lr_scheduler_configs = self.lr_scheduler_configs
for config in lr_scheduler_configs:
    scheduler = config.scheduler
    assert isinstance(scheduler, _LRScheduler)
    scheduler.base_lrs = [lr * self.world_size for lr in scheduler.base_lrs]
But ReduceLROnPlateau does not inherit from torch.optim.lr_scheduler._LRScheduler.
Does anyone know how to use ReduceLROnPlateau with horovod?
This is the optimizer and lr_scheduler setup in my LightningModule's configure_optimizers function:
optimizer, (sch, sch_val) = get_opt_sch_bertfinetune(
    self, conf, self.args, None, total_steps=total_steps, val_metric_mode='min')
# linear warm-up lr scheduler
sch = {
    'scheduler': sch,  # a torch.optim.lr_scheduler._LRScheduler; passes the horovod assert check
    'interval': 'step',
}
# ReduceLROnPlateau
sch_val = {
    'scheduler': sch_val,  # torch.optim.lr_scheduler.ReduceLROnPlateau
    'monitor': self.val_metric_name,
    'frequency': 1,
}
return [optimizer], [sch, sch_val]
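One possible workaround (just a sketch under my own assumptions, not something Lightning documents for Horovod): return only the _LRScheduler-based warm-up scheduler from configure_optimizers so the Horovod assert passes, keep the ReduceLROnPlateau on the module yourself, and step it manually from a validation hook:

# Sketch of a possible workaround: hide ReduceLROnPlateau from the Horovod
# strategy by not returning it from configure_optimizers, and step it by hand.
# get_opt_sch_bertfinetune, conf, total_steps and val_metric_name come from
# the snippet above.
def configure_optimizers(self):
    optimizer, (sch, sch_val) = get_opt_sch_bertfinetune(
        self, conf, self.args, None, total_steps=total_steps, val_metric_mode='min')
    # Keep the plateau scheduler on the module instead of returning it, so
    # HorovodStrategy never runs its isinstance(..., _LRScheduler) check on it.
    self._plateau_scheduler = sch_val
    return [optimizer], [{'scheduler': sch, 'interval': 'step'}]

def on_validation_epoch_end(self):
    # Step ReduceLROnPlateau manually with the logged validation metric.
    metric = self.trainer.callback_metrics.get(self.val_metric_name)
    if metric is not None:
        self._plateau_scheduler.step(metric)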

Related

Problem following along with a notebook on Kaggle: "max() received an invalid combination of arguments" issue

For my own study purposes I am following along with a very popular notebook for sentiment classification with BERT.
Kaggle notebook for sentiment classification with BERT
But instead of training the model as in the notebook, I just load another model:
MODEL_NAME = "nlptown/bert-base-multilingual-uncased-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
and want to test this on my data, to get a heatmap and accuracy score like at the end of that notebook.
But when I get to the evaluation step, I get:
TypeError: max() received an invalid combination of arguments - got (SequenceClassifierOutput, dim=int), but expected one of:
* (Tensor input)
* (Tensor input, Tensor other, *, Tensor out)
* (Tensor input, int dim, bool keepdim, *, tuple of Tensors out)
* (Tensor input, name dim, bool keepdim, *, tuple of Tensors out)
in the evaluation function, at the line
_, preds = torch.max(outputs, dim=1)
I tried to change this to
_, preds = torch.max(torch.tensor(outputs), dim=1)
But then I got another issue:
RuntimeError: Could not infer dtype of SequenceClassifierOutput
The evaluation method looks like this:
def eval_model(model, data_loader, loss_fn, device, n_examples):
    model = model.eval()
    losses = []
    correct_predictions = 0
    with torch.no_grad():
        for d in data_loader:
            input_ids = d["input_ids"].to(device)
            attention_mask = d["attention_mask"].to(device)
            targets = d["targets"].to(device)
            # Get model outputs
            outputs = model(
                input_ids=input_ids,
                attention_mask=attention_mask,
            )
            _, preds = torch.max(outputs, dim=1)
            loss = loss_fn(outputs, targets)
            correct_predictions += torch.sum(preds == targets)
            losses.append(loss.item())
    return correct_predictions.double() / n_examples, np.mean(losses)
And outputs itself in the code above looks like this:
SequenceClassifierOutput(loss=None, logits=tensor([[ 2.2241, 1.2025, 0.1638, -1.4620, -1.6424],
[ 3.1578, 1.3957, -0.1131, -1.8141, -1.9536],
[ 0.7273, 1.7851, 1.1237, -0.9063, -2.3822],
[ 0.9843, 0.9711, 0.5067, -0.7553, -1.4547],
[-0.4127, -0.8895, 0.0572, 0.3550, 0.7377],
[-0.4885, 0.6933, 0.8272, -0.3176, -0.7546],
[ 1.3953, 1.4224, 0.7842, -0.9143, -2.2898],
[-2.4618, -1.2675, 0.5480, 1.4326, 1.2893],
[ 2.5044, 0.9191, -0.1483, -1.4413, -1.4156],
[ 1.3901, 1.0331, 0.4259, -0.8006, -1.6999],
[ 4.2252, 2.6539, -0.0392, -2.6362, -3.3261],
[ 1.9750, 1.8845, 0.6779, -1.3163, -2.5570],
[ 5.1688, 2.2360, -0.6230, -2.9657, -2.9031],
[ 1.1857, 0.4277, -0.1837, -0.7163, -0.6682],
[ 2.1133, 1.3829, 0.5750, -1.3095, -2.2234],
[ 2.3258, 0.9406, -0.0115, -1.1673, -1.6775]], device='cuda:0'), hidden_states=None, attentions=None)
How can I make it work?
Kind regards
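A likely fix (a sketch based on the error above, not code from the notebook): Hugging Face sequence-classification models return a SequenceClassifierOutput object, and the raw scores live in its logits attribute, which is the tensor that torch.max and the loss function expect:

# Sketch, assuming the eval_model() loop above: unpack the logits tensor from
# the SequenceClassifierOutput before computing predictions and the loss.
outputs = model(input_ids=input_ids, attention_mask=attention_mask)
logits = outputs.logits                 # shape: (batch_size, num_labels)
_, preds = torch.max(logits, dim=1)
loss = loss_fn(logits, targets)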

Adding Multiple Hosts to INET/OMNET++ Throughput Example

I have been working on adding more than one host to the INET throughput example.
inet/showcases/wireless/throughput
However, when I run the code after adding more hosts, the generated graph looks pretty similar to the original, and I expected some sort of obvious difference, which leads me to believe that there is something wrong with the code.
Original example code:
throughput.ini
[General]
[Config Throughput]
network = Throughput
sim-time-limit = 1s
*.*Host.ipv4.arp.typename = "GlobalArp"
*.*Host.wlan[*].mgmt.typename = "Ieee80211MgmtAdhoc"
*.*Host.wlan[*].agent.typename = ""
*.*Host.wlan[*].opMode = "g(erp)"
*.*Host.wlan[*].bitrate = ${bitrate = 6,9,12,18,24,36,48,54}Mbps
*.*Host.wlan[*].mac.dcf.originatorMacDataService.fragmentationPolicy.fragmentationThreshold = 2304B + 28B
*.*Host.wlan[*].radio.separateReceptionParts = true
*.*Host.wlan[*].radio.separateTransmissionParts = true
*.sourceHost.numApps = 1
*.sourceHost.app[0].typename = "UdpBasicApp"
*.sourceHost.app[*].destAddresses = "destinationHost"
*.sourceHost.app[*].destPort = 5000
*.sourceHost.app[*].packetName = "UDPData-"
*.sourceHost.app[*].startTime = 0s
*.sourceHost.app[*].messageLength = ${packetLength = 100, 1000, 2268}byte
*.sourceHost.app[*].sendInterval = ${packetLength} * 8 / ${bitrate} * 1us
*.destinationHost.numApps = 1
*.destinationHost.app[0].typename = "UdpSink"
*.destinationHost.app[*].localPort = 5000
throughput.ned
package inet.showcases.wireless.throughput;
import inet.networklayer.configurator.ipv4.Ipv4NetworkConfigurator;
import inet.node.inet.WirelessHost;
import inet.physicallayer.ieee80211.packetlevel.Ieee80211ScalarRadioMedium;
network Throughput
{
    @display("bgb=6,4");
    @statistic[throughput](source=liveThroughput(destinationHost.app[0].packetReceived)/1000000; record=figure; targetFigure=panel.throughput; checkSignals=false);
    @statistic[numRcvdPk](source=count(destinationHost.app[0].packetReceived); record=figure; targetFigure=panel.numRcvdPkCounter; checkSignals=false);
    @figure[panel](type=panel; pos=1.5,0.1);
    // @figure[panel.throughput](type=gauge; pos=0,0; size=100,100; minValue=0; maxValue=40; tickSize=5; label="App level throughput [Mbps]");
    @figure[panel.throughput](type=linearGauge; pos=250,50; size=250,30; minValue=0; maxValue=54; tickSize=6; label="Application level throughput [Mbps]");
    @figure[panel.numRcvdPkCounter](type=counter; pos=50,50; label="Packets received"; decimalPlaces=6);
    submodules:
        sourceHost: WirelessHost {
            @display("p=3.019269,2.746169");
        }
        destinationHost: WirelessHost {
            @display("p=4.369595,1.8054924");
        }
        configurator: Ipv4NetworkConfigurator {
            @display("p=1.0772266,0.6220604");
        }
        radioMedium: Ieee80211ScalarRadioMedium {
            @display("p=1.03171,1.9723867");
        }
}
The way I did it was to copy the source host submodule in throughput.ned and give it a name like sourceHost2. Then I modified the ini file to accommodate the changes, for example changing
*.*Host.ipv4.arp.typename = "GlobalArp"
to
*.*Host*.ipv4.arp.typename = "GlobalArp"
This runs fine but does not really change the throughput at all; there are one or two slightly sharper peaks/troughs in the graphs, but nothing noticeable. Any ideas?

How to distribute and fit h2o model by group in sparklyr

I would like to fit a model by group in h2o using some type of distributed apply function.
I tried the following, but it doesn't work, probably because I cannot pipe the sc object through.
df %>%
  spark_apply(
    function(e)
      h2o.coxph(x = predictors,
                event_column = "event",
                stop_column = "time_to_next",
                training_frame = as_h2o_frame(sc, e, strict_version_check = FALSE)),
    group_by = "id"
  )
I receive a pretty generic spark error like this:
error : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 23.0 failed 4 times, most recent failure: Lost task 0.3 in stage 23.0 :
I'm not sure you can return an entire H2OCoxPHModel from sparklyr::spark_apply(): the errors are "no method for coercing this S4 class to a vector" if you set the fetch_result_as_sdf argument to FALSE, and "cannot coerce class ‘structure("H2OCoxPHModel", package = "h2o")’ to a data.frame" if you set it to TRUE.
But if you can make your own vector or dataframe from the relevant parts of the model, I think you can do it.
Here I'll use a sample file from the H2O docs on Cox Proportional Hazards (CoxPH), and I'll use group_by = "surgery".
heart_hf <- h2o::h2o.importFile("http://s3.amazonaws.com/h2o-public-test-data/smalldata/coxph_test/heart.csv")
##### Convert to Spark DataFrame since I assume that is the use case
heart_sf <- sparklyr::copy_to(sc, heart_hf %>% as.data.frame())
##### Use sparklyr::spark_apply() on Spark DataFrame to "distribute and fit h2o model by group"
sparklyr::spark_apply(
  x = heart_sf,
  f = function(x) {
    h2o::h2o.init()
    heart_coxph <- h2o::h2o.coxph(x = c("age", "year"),
                                  event_column = "event",
                                  start_column = "start",
                                  stop_column = "stop",
                                  ties = "breslow",
                                  training_frame = h2o::as.h2o(x, strict_version_check = FALSE))
    return(data.frame(conc = heart_coxph@model$model_summary$concordance))
  },
  columns = list(surgery = "integer", conc = "numeric"),
  group_by = c("surgery"))
# Source: spark<?> [?? x 2]
  surgery  conc
    <int> <dbl>
1       1 0.588
2       0 0.614

How to do parallel processing in pytorch

I am working on a deep learning problem and solving it using PyTorch. I have two GPUs on the same machine (16273MiB, 12193MiB), and I want to use both GPUs for my training (video dataset).
I get a warning:
There is an imbalance between your GPUs. You may want to exclude GPU 1 which
has less than 75% of the memory or cores of GPU 0. You can do so by setting
the device_ids argument to DataParallel, or by setting the CUDA_VISIBLE_DEVICES
environment variable.
warnings.warn(imbalance_warn.format(device_ids[min_pos], device_ids[max_pos]))
I also get an error:
raise TypeError('Broadcast function not implemented for CPU tensors')
TypeError: Broadcast function not implemented for CPU tensors
if __name__ == '__main__':
    opt.scales = [opt.initial_scale]
    for i in range(1, opt.n_scales):
        opt.scales.append(opt.scales[-1] * opt.scale_step)
    opt.arch = '{}-{}'.format(opt.model, opt.model_depth)
    opt.mean = get_mean(opt.norm_value)
    opt.std = get_std(opt.norm_value)
    print("opt", opt)
    with open(os.path.join(opt.result_path, 'opts.json'), 'w') as opt_file:
        json.dump(vars(opt), opt_file)
    torch.manual_seed(opt.manual_seed)
    model, parameters = generate_model(opt)
    # print(model)
    pytorch_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print("Total number of trainable parameters: ", pytorch_total_params)
    # Define class weights
    if opt.weighted:
        print("Weighted Loss is created")
        if opt.n_finetune_classes == 2:
            weight = torch.tensor([1.0, 3.0])
        else:
            weight = torch.ones(opt.n_finetune_classes)
    else:
        weight = None
    criterion = nn.CrossEntropyLoss()
    if not opt.no_cuda:
        criterion = nn.DataParallel(criterion.cuda())
    if opt.no_mean_norm and not opt.std_norm:
        norm_method = Normalize([0, 0, 0], [1, 1, 1])
    elif not opt.std_norm:
        norm_method = Normalize(opt.mean, [1, 1, 1])
    else:
        norm_method = Normalize(opt.mean, opt.std)
    train_loader = torch.utils.data.DataLoader(
        training_data,
        batch_size=opt.batch_size,
        shuffle=True,
        num_workers=opt.n_threads,
        pin_memory=True)
    train_logger = Logger(
        os.path.join(opt.result_path, 'train.log'),
        ['epoch', 'loss', 'acc', 'precision', 'recall', 'lr'])
    train_batch_logger = Logger(
        os.path.join(opt.result_path, 'train_batch.log'),
        ['epoch', 'batch', 'iter', 'loss', 'acc', 'precision', 'recall', 'lr'])
    if opt.nesterov:
        dampening = 0
    else:
        dampening = opt.dampening
    optimizer = optim.SGD(
        parameters,
        lr=opt.learning_rate,
        momentum=opt.momentum,
        dampening=dampening,
        weight_decay=opt.weight_decay,
        nesterov=opt.nesterov)
    # scheduler = lr_scheduler.ReduceLROnPlateau(
    #     optimizer, 'min', patience=opt.lr_patience)
    if not opt.no_val:
        spatial_transform = Compose([
            Scale(opt.sample_size),
            CenterCrop(opt.sample_size),
            ToTensor(opt.norm_value), norm_method
        ])
    print('run')
    for i in range(opt.begin_epoch, opt.n_epochs + 1):
        if not opt.no_train:
            adjust_learning_rate(optimizer, i, opt.lr_steps)
            train_epoch(i, train_loader, model, criterion, optimizer, opt,
                        train_logger, train_batch_logger)
I have also made changes in my train file:
model = nn.DataParallel(model(),device_ids=[0,1]).cuda()
outputs = model(inputs)
It does not seem to work properly and gives an error. Please advise; I am new to PyTorch.
Thanks
As mentioned in this link, you have to call model.cuda() before passing the model to nn.DataParallel.
net = nn.DataParallel(model.cuda(), device_ids=[0,1])
https://github.com/pytorch/pytorch/issues/17065
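A minimal sketch of that pattern with a hypothetical model and random inputs (not the code from the question), which also keeps the loss as a plain module and moves the inputs to the GPU so no CPU tensors reach the broadcast step:

import torch
import torch.nn as nn

# Hypothetical toy model; the key points are calling .cuda() on the model
# before wrapping it in DataParallel and sending the inputs to the GPU.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model = nn.DataParallel(model.cuda(), device_ids=[0, 1])

criterion = nn.CrossEntropyLoss().cuda()  # plain loss module, not wrapped in DataParallel

inputs = torch.randn(4, 16).cuda()        # the batch is split across device_ids by DataParallel
targets = torch.randint(0, 2, (4,)).cuda()

outputs = model(inputs)                   # gathered back on device_ids[0]
loss = criterion(outputs, targets)
loss.backward()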

Unable to predict when loading a Tensorflow model in Go

I've loaded a TensorFlow model in Go and cannot get predictions; it keeps complaining about a shape mismatch for a simple 2D array. I would appreciate an idea here, thank you so much in advance.
Error running the session with input, err: You must feed a value for placeholder tensor 'theoutput_target' with dtype float
[[Node: theoutput_target = Placeholder[_output_shapes=[[?,?]], dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
The input tensor being sent is a [][]float32{ {1.0}, }:
a := [][]float32{ {1.0}, }
tensor, terr := tf.NewTensor(a)
if terr != nil {
    fmt.Printf("Error creating input tensor: %s\n", terr.Error())
    return
}
result, runErr := model.Session.Run(
    map[tf.Output]*tf.Tensor{
        model.Graph.Operation("theinput").Output(0): tensor,
    },
    []tf.Output{
        model.Graph.Operation("theoutput_target").Output(0),
    },
    nil,
)
and the model is generated via Keras and exported to TF using SavedModelBuilder after being built like this:
layer_name_input = "theinput"
layer_name_output = "theoutput"

def get_encoder():
    model = Sequential()
    model.add(Dense(5, input_dim=1))
    model.add(Activation("relu"))
    model.add(Dense(5, input_dim=1))
    return model

inputs = Input(shape=(1, ), name=layer_name_input)
encoder = get_encoder()
model = encoder(inputs)
model = Activation("relu")(model)
objective = Dense(1, name=layer_name_output)(model)
model = Model(inputs=[inputs], outputs=objective)
model.compile(loss='mean_squared_error', optimizer='sgd')
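Since the Go code looks up graph operations by name, it can help to print the tensor names Keras actually assigned before exporting (a sketch against the model built above; the exact names in the comments are only examples):

# Sketch: inspect the graph tensor names Keras assigned, so the Go side can
# feed/fetch the right operations. The printed names are examples only.
print([t.name for t in model.inputs])    # e.g. ['theinput:0']
print([t.name for t in model.outputs])   # e.g. ['theoutput/BiasAdd:0']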
EDIT: fixed, it was a problem with exporting from Keras to TF (layer names). Pasting the export here, hopefully helpful for someone else:
def export_to_tf(keras_model_path, export_path, export_version, is_functional=False):
    sess = tf.Session()
    K.set_session(sess)
    K.set_learning_phase(0)
    export_path = os.path.join(export_path, str(export_version))
    model = load_model(keras_model_path)
    config = model.get_config()
    weights = model.get_weights()
    if is_functional == True:
        model = Model.from_config(config)
    else:
        model = Sequential.from_config(config)
    model.set_weights(weights)
    with K.get_session() as sess:
        inputs = [(model_input.name.split(":")[0], model_input) for model_input in model.inputs]
        outputs = [(model_output.name.split(":")[0], model_output) for model_output in model.outputs]
        signature = predict_signature_def(inputs=dict(inputs),
                                          outputs=dict(outputs))
        input_descriptor = [{'name': item[0], 'shape': item[1].shape.as_list()} for item in inputs]
        output_descriptor = [{'name': item[0], 'shape': item[1].shape.as_list()} for item in outputs]
        builder = saved_model_builder.SavedModelBuilder(export_path)
        builder.add_meta_graph_and_variables(
            sess=sess,
            tags=[tag_constants.SERVING],
            signature_def_map={signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
        builder.save()
        descriptor = dict()
        descriptor["inputs"] = input_descriptor
        descriptor["outputs"] = output_descriptor
        pprint.pprint(descriptor)
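For completeness, a hypothetical call to this helper (the paths and version are placeholders, not values from the post):

# Hypothetical usage of the export helper above; adjust paths as needed.
export_to_tf("model.h5", "exported_models", 1, is_functional=True)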
There is something strange in your code and error: TensorFlow is complaining about a missing value for the placeholder named 'theoutput_target', yet this placeholder is never defined in the code you posted. Instead, your code defines a placeholder whose name is 'theinput'.
Also, I suggest you use a more complete and easier-to-use wrapper around the TensorFlow API: tfgo
