Why doesn't TensorFlow in Go have the same optimizers as Python?

I am new to TensorFlow in Go.
I have some doubts from my first training demo: I can find only one optimizer in Go's wrappers.go, but the Python demos I've studied have several optimizers, like:
GradientDescentOptimizer
AdagradOptimizer
AdagradDAOptimizer
MomentumOptimizer
AdamOptimizer
FtrlOptimizer
RMSPropOptimizer
I also see a family of funcs sharing the prefix ResourceApply...:
GradientDescent
Adagrad
AdagradDA
Momentum
Adam
Ftrl
RMSProp.
They return an option, and I don't know what their purpose is; I can't find the relation between them and the optimizers.
How can I train a model in Go with TensorFlow?
What should I use in Go in place of Python's tf.Variable?

You can't train a TensorFlow model using Go.
The only thing you can do is load a pre-trained model and use it for inference.
You can't because the Go implementation lacks Variable support, so it's impossible to train anything at the moment.
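To make the workflow concrete, here is a minimal sketch, assuming the TF 1.x Python API that was current when this was asked: the tf.Variable, the optimizer, and the training step all live on the Python side, and the trained graph is exported as a SavedModel (names and values are illustrative).

import tensorflow as tf

# Toy linear model; tf.Variable is exactly what the Go bindings lack.
x = tf.placeholder(tf.float32, shape=[None, 2], name="input")
w = tf.Variable(tf.zeros([2, 1]))
y = tf.identity(tf.matmul(x, w), name="output")

loss = tf.reduce_mean(tf.square(y - 1.0))
# Any of the optimizers listed in the question can be swapped in here.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op, feed_dict={x: [[1.0, 2.0]]})  # a training step
    # Export a SavedModel for the Go bindings to consume.
    tf.saved_model.simple_save(sess, "exported_model",
                               inputs={"input": x}, outputs={"output": y})

From Go, the exported directory can then be loaded with tf.LoadSavedModel("exported_model", []string{"serve"}, nil) and its Session run for inference only.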

Related

Does cscope have a query language/API?

I am trying to do some deep code analysis on a python2 codebase that is large and messy enough that most analysis tools I've tried have not worked. I have however been able to use pycscope to generate a cscope database. I can even use it to do basic things like find function usages or all functions called directly from a given function.
To me, the fact that there is a database, and that it can be used for simple things, suggests it should be possible to use it for more complex things too: for example, finding all functions called (recursively) from within a given function, or all code paths that depend on that function.
However, cscope's documentation is very light and I'm not much of a C expert. Is there an actual query language for it, or an extension that knows how to use the database in this manner?

Fine-tuning pre-trained Word2Vec model with Gensim 4.0

With Gensim < 4.0, we can retrain a word2vec model using the following code:
model = Word2Vec.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
model.train(my_corpus, total_examples=len(my_corpus), epochs=model.epochs)
However, as I understand it, Gensim 4.0 no longer supports Word2Vec.load_word2vec_format; I can only load the vectors as KeyedVectors.
How to fine-tune a pre-trained word2vec model (such as the model trained on GoogleNews) with my domain-specific corpus using Gensim 4.0?
I don't think that code would ever have worked, even in Gensim versions before 4.0. A plain set of word-vectors, like GoogleNews-vectors-negative300.bin, does not have (& never has had) enough info to continue training.
It's missing the hidden-to-output layer weights & word-frequency info essential for training.
Looking at past source code, as of release 1.0.0 (February 2017), that code would've already given a deprecation error with a pointer to the method for loading a plain set-of-word-vectors (to address people with the mistaken notion that this could work), and raised other errors on any attempt to train() such a model. (Pre-1.0.0, the docs also warned that this would not work, & it would have failed with a less-helpful error.)
As one of those errors mentioned, there has at times been experimental support for loading some of a prior set-of-word-vectors to clobber any words in an existing model's already-initialized vocabulary, via .intersect_word2vec_format(). But by default that both (1) locks the imported vectors against further change; (2) brings in no new words. That's unlike what people most often want from "fine-tuning", so it's not a ready-made help for that goal.
I believe some people have cobbled together custom code to achieve various kinds of fine-tuning in their projects – but I don't know of anyone who's published a reliable recipe or strong results. (And I suspect some of the people who think they're doing this well just haven't rigorously evaluated the steps they are taking.)
If you have a recipe you know worked pre-Gensim-4.0.0, it should be adaptable: the 4.0 changes to the Word2Vec-related classes were mainly refactorings, optimizations, & new options, with little-to-no removal of functionality. But a reliable description of what used to work, or of which particular fine-tuning strategy is being pursued for what specific benefits, would be needed to make more specific recommendations.
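For reference, a minimal sketch of what does work in Gensim 4.0: loading the GoogleNews file read-only as KeyedVectors, & training a separate Word2Vec model on your own corpus (the corpus below is a tiny placeholder):

from gensim.models import KeyedVectors, Word2Vec

# The GoogleNews file loads only as read-only KeyedVectors: lookups work,
# but this object has no .train().
kv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)
print(kv.most_similar("computer"))

# The supported route to domain-specific vectors: train a fresh model.
# Note the Gensim 4.0 parameter names vector_size & epochs (formerly size & iter).
my_corpus = [
    ["domain", "specific", "sentence"],
    ["another", "domain", "specific", "sentence"],
]
model = Word2Vec(sentences=my_corpus, vector_size=300, min_count=1, epochs=5)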

Sampling from user provided target densities in PyMC3

Is it possible to sample from a user-provided target density in PyMC3 in an easy way? I.e., I want to be able to provide black-box functions logposterior(theta) and grad_logposterior(theta) and sample from those, instead of specifying a model in PyMC3's modeling language.
This is a bit clunky. You'd need to create a new Theano Op. Here are a few examples: https://github.com/Theano/Theano/blob/master/theano/tensor/slinalg.py#L32
You then need to create a distribution class that evaluates the logp via your new Op, for example: https://github.com/pymc-devs/pymc3/blob/master/pymc3/distributions/continuous.py#L70
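For illustration, here is a sketch of those two pieces wired together, with a toy Gaussian log-posterior standing in for the black-box callables; pm.DensityDist accepts the Op-backed logp, and the Op's grad() lets gradient-based samplers such as NUTS use your gradient:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

# Stand-ins for the user's black-box callables (NumPy arrays in, NumPy out).
def logposterior(theta):
    return -0.5 * np.sum(theta ** 2)

def grad_logposterior(theta):
    return -theta

class LogPosteriorGrad(tt.Op):
    itypes = [tt.dvector]   # theta
    otypes = [tt.dvector]   # d logp / d theta
    def perform(self, node, inputs, outputs):
        (theta,) = inputs
        outputs[0][0] = np.asarray(grad_logposterior(theta))

class LogPosterior(tt.Op):
    itypes = [tt.dvector]   # theta
    otypes = [tt.dscalar]   # logp(theta)
    def __init__(self):
        self.grad_op = LogPosteriorGrad()
    def perform(self, node, inputs, outputs):
        (theta,) = inputs
        outputs[0][0] = np.array(logposterior(theta))
    def grad(self, inputs, output_grads):
        (theta,) = inputs
        return [output_grads[0] * self.grad_op(theta)]

logp_op = LogPosterior()

with pm.Model():
    theta = pm.DensityDist("theta", lambda v: logp_op(v), shape=2,
                           testval=np.zeros(2))
    trace = pm.sample(1000)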

Find image in another image using javacv

I want to find an image inside another image. I have already tried a template matching approach, but I didn't know how to make it invariant to changes in scale, rotation, perspective, etc.
I have read about feature detection and suspect that using SIFT features might be the best approach. Besides that, I need an implementation that uses JavaCV, not OpenCV.
Is there an implementation using feature detection, or any other proposal for my problem?
If you understand the basics of JavaCV, you can look at JavaCV's ObjectFinder example:
ObjectFinder # code.google.com
This example shows you how to do the important steps to solve your problem.
Before using the ObjectFinder, you have to call the following method to load the non-free modules (e.g. SURF):
com.googlecode.javacv.cpp.opencv_nonfree.initModule_nonfree();
Just for completeness: you can use the image feature matching capabilities provided by OpenCV, described here. There is even a full implementation of a well-working matcher with JavaCV (in Scala, though it's easily portable to Java).
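To make the steps concrete, here is a sketch of the feature-matching pipeline, shown with OpenCV's Python bindings for brevity (file names are placeholders; JavaCV exposes the same classes). ORB is used to avoid the patented non-free SIFT/SURF modules:

import cv2
import numpy as np

template = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)  # image to find
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # image to search in

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Hamming distance suits ORB's binary descriptors; crossCheck prunes weak matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# A RANSAC-estimated homography over the best matches gives invariance to
# scale, rotation and moderate perspective change.
src = np.float32([kp1[m.queryIdx].pt for m in matches[:30]]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches[:30]]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)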

Library system in prolog

I need help with a library system in Prolog.
I tried to define all the books in my library this way:
book(['programming in logic'],
     [nm(k, clark), nm(f, mcCabe)],
     ['programation'],
     ['editorial 123']).
And I tried to do a query for all programming books this way:
?- book(Title, Author, Genre, Editorial),
   member('programation', Genre).
but I need to suggest books by genre, author...
I also need to do statistics: the most wanted book, the most searched genre, the most wanted author, things like that, but I'm not sure how to define the rules for these queries. I have searched for examples, but I only find things like family trees, which I don't understand. If you could help with examples for this exercise, I would appreciate it very much.
For sure, modelling a library system can be a very complex topic.
I would suggest starting by learning RDF, for instance with SWI-Prolog, which has a very powerful library devoted to the task.
I have only tried using RDF to model objects simpler than the biblio domain.
Anyway, I googled 'biblio ontology' and got some reasonable results, like bibo.
To start with, maybe you could consider some introductory material.
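To give a taste of what RDF modelling looks like for this domain, here is a small sketch. It uses Python's rdflib rather than SWI-Prolog's semweb library, purely for brevity; the triples are the same either way, the bibo namespace is the ontology mentioned above, and all URIs and literals are illustrative:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

BIBO = Namespace("http://purl.org/ontology/bibo/")

g = Graph()
book = URIRef("http://example.org/books/programming-in-logic")
g.add((book, RDF.type, BIBO.Book))
g.add((book, DC.title, Literal("programming in logic")))
g.add((book, DC.creator, Literal("K. Clark")))
g.add((book, DC.subject, Literal("programation")))

# Suggesting books by genre then becomes a SPARQL query over the graph.
q = '''SELECT ?b WHERE {
    ?b <http://purl.org/dc/elements/1.1/subject> "programation" .
}'''
for row in g.query(q):
    print(row.b)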
