Training using multiple GPUs - returnn

I want to train RETURNN on the LibriSpeech dataset using multiple GPUs, but I don't know how to do it.
Is this possible? I don't see any option to enable it in the .config file.

Yes, it is possible. You can find a description in the documentation. It currently uses Horovod; basically you have to set use_horovod = True, but please see the documentation for further details.
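As a hedged sketch, the relevant part of a RETURNN config (which is a Python file) might look like the following; only use_horovod itself is stated in the answer above, the launch command and everything else here is illustrative and should be checked against the RETURNN documentation:

```python
# RETURNN config (a Python file): enable multi-GPU training via Horovod.
use_horovod = True

# Illustrative: further Horovod-related options exist (see the docs),
# and training is then typically launched through MPI, e.g.:
#   mpirun -np 4 python rnn.py my_setup.config
```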

Related

RETURNN Librispeech Task: reused parameters of pretrained model for both LM and encoder-decoder model

I want to train RETURNN on the LibriSpeech dataset, reusing the pretrained LM and encoder-decoder models offered on Git, but I don't know how to do it. Is this possible? I don't see any option to enable it in the .config file.
Yes, it is possible. The models can be downloaded here.
Do you just want to use the pretrained model for recognition? Then you don't need to do anything at all; just use it with the provided recognition scripts (see the paper, and the same repository).
Or do you want to train it further, using some additional data or so? In that case, this is also very simple. There is e.g. the option import_model_train_epoch1. There are also related options you can use, depending on what exactly you want to do. See e.g. the comments in the code on preload_from_files.
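As a hedged sketch of what that could look like in a RETURNN config (the paths and the sub-network prefix are illustrative; check the comments on preload_from_files in the RETURNN code for the exact keys and semantics):

```python
# Continue training from a downloaded checkpoint, starting in epoch 1:
import_model_train_epoch1 = "/path/to/pretrained/model.checkpoint"

# Or load only selected parameters, e.g. a pretrained LM, into a
# larger network (illustrative structure):
preload_from_files = {
    "lm": {
        "filename": "/path/to/lm/model.checkpoint",
        "prefix": "lm_",  # load params whose names start with this prefix
    },
}
```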

In OMNeT++, where can I find the different values of parameters?

Does anyone know where I can see the different values (settings) of a parameter? For example, if we need to change a default value, which other options do we have to set?
Thanks
The default settings are located in the corresponding .ned file of your module. If you have a setup for your simulation, you usually change your parameters in the omnetpp.ini file.
The TicToc tutorial covers all of this; TicToc 7 in particular might be useful for you.
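To make the two places concrete: the .ned file declares a parameter and its default, and omnetpp.ini overrides it per simulation run. A minimal sketch, with the network and parameter names made up for illustration:

```ini
# omnetpp.ini: override a parameter default that is declared
# in the module's .ned file (names here are illustrative).
[General]
network = TictocNetwork
# Override the default value from the .ned file:
**.tic.delayTime = 100ms
```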

How to use PNG mask in segmentation object detection with Tensorflow

I am using the tutorial from here.
I use the mask_rcnn_inception_v2 detection model with my own dataset. I want to add PNG masks; I used some applications to create them, but I wonder how to feed this data into the detection pipeline. I don't see it mentioned anywhere.
How do I use PNG masks in object detection? (Where do I put them, and how do I use them?)
Also, do you know how to launch evaluation and training at the same time in TensorBoard? I have seen that it is possible.
Generally, where can I ask general TensorFlow questions, such as explanations of the configuration files? On the TensorFlow GitHub it is specified that we have to ask questions here, because they are not TensorFlow issues, and there is a great community here with some great guys!
Thanks to a guy on GitHub who pointed out a missing setting in pipeline.config:
number_of_stages: 3
It changed all the results; I can see the mask now. Youpi!
For any further information, there's a good explanation here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/instance_segmentation.md
It explains how to prepare your masks and what to modify.
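For reference, that setting sits in the model section of pipeline.config (protobuf text format). The snippet below is a hedged sketch with the surrounding fields omitted; check it against the instance segmentation guide linked above:

```
model {
  faster_rcnn {
    # 3 enables the mask prediction stage in addition to box prediction
    number_of_stages: 3
    # other model fields (num_classes, feature extractor, ...) omitted
  }
}
```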

How do I build a n-attribute recommendation system in Ruby

I'm thinking about building a recommendation system in Ruby that accepts 8 attributes. The system will look at a matrix of samples and then give recommendations based on the sample data. How do I do this? Thanks in advance.
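One possible starting point is nearest-neighbor recommendation: represent each sample as an 8-attribute vector and rank samples by cosine similarity to a query. A minimal sketch, with all names and the sample data made up for illustration:

```ruby
# Cosine similarity between two equal-length attribute vectors.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  norm = Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x })
  norm.zero? ? 0.0 : dot / norm
end

# Return the names of the k samples most similar to the query vector.
def recommend(samples, query, k = 3)
  samples
    .map { |name, attrs| [name, cosine_similarity(attrs, query)] }
    .sort_by { |_, score| -score }
    .first(k)
    .map(&:first)
end

# Each sample is an 8-attribute vector (made-up data).
samples = {
  "item_a" => [1, 0, 2, 3, 0, 1, 4, 2],
  "item_b" => [0, 1, 0, 0, 5, 2, 0, 1],
  "item_c" => [1, 0, 2, 2, 0, 1, 3, 2],
}
query = [1, 0, 2, 3, 0, 1, 4, 1]
puts recommend(samples, query, 2).inspect  # the two closest items
```

For anything beyond a toy, you would normalize the attributes first (so one large-valued attribute doesn't dominate) and consider a proper library instead of hand-rolled similarity.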

Pulling stats out of a text

I'd like to know what the most recurrent words are in a given text or group of texts (pulled from a database), in Ruby.
Does anyone know what the best practices are?
You might start with statistical natural language processing. Also, you may be able to leverage one or more of the libraries mentioned on the AI Ruby Plugins page.
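Before reaching for a library, a word-frequency count goes a long way for "most recurrent" queries. A minimal sketch, where the tokenizer and stop-word list are deliberately crude and illustrative:

```ruby
# Words to ignore when counting (illustrative, not exhaustive).
STOP_WORDS = %w[the a an and or of to in is it].freeze

# Return the k most frequent non-stop-words as [word, count] pairs.
def top_words(text, k = 5)
  text.downcase
      .scan(/[a-z']+/)                     # crude tokenizer
      .reject { |w| STOP_WORDS.include?(w) }
      .tally                               # word => count (Ruby 2.7+)
      .sort_by { |_, count| -count }
      .first(k)
end

text = "the cat sat on the mat and the cat ate the mat"
puts top_words(text, 2).inspect  # the two most frequent non-stop-words
```

For more than raw frequency (stemming, collocations, tf-idf across the database), that is where the NLP libraries mentioned above come in.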
