I have trained a neural-network model using TensorFlow in Python.
I would like to store this model in a file and later load it into
memory in a C++ program for prediction.
I am doing a comparative study of my machine learning model versus a standard algorithm written in C++. For this reason, I would like to load the model and do the prediction in C++, since I don't want the internals of the programming language to cause differences in the runtimes of the two implementations.
Are there other ways to keep the comparisons language-neutral?
Yes, I think you can do it using Bazel (Google's build tool, which TensorFlow uses). Once you make sure that you are saving checkpoints, build the project as a C++ target that links against the TensorFlow C++ API; Bazel will create an executable file for you to use in C++.
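On the Python side, a minimal sketch of what "saving checkpoints" can look like, assuming TF 1.x APIs (the graph, tensor names, and paths here are illustrative, not taken from your model):

```python
import tensorflow as tf

# Illustrative graph: names like "input"/"output" are placeholders you
# would match to whatever your real model actually uses.
x = tf.placeholder(tf.float32, shape=[None, 4], name="input")
w = tf.Variable(tf.zeros([4, 2]), name="weights")
y = tf.nn.softmax(tf.matmul(x, w), name="output")

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training would happen here ...
    # Save the variable values (checkpoint) and the graph structure;
    # tools like freeze_graph can merge them into a single .pb file
    # that the TensorFlow C++ API can load for inference.
    saver.save(sess, "model/model.ckpt")
    tf.train.write_graph(sess.graph_def, "model", "graph.pb", as_text=False)
```

Writing out both the checkpoint (variable values) and the GraphDef (structure) is what makes the model loadable outside Python.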
I have a network and I am selecting the framework to implement it in: Keras, TF-Slim, or TensorFlow.
My question is: does the performance (accuracy) decrease when we use Keras/TF-Slim instead of native TensorFlow? I found that the training time may be lower when we use Keras (Native TF vs Keras TF performance comparison), but what about accuracy?
There should not be any difference in accuracy when using Keras (with the TF backend) or TF-Slim compared to native TensorFlow, as they are just high-level libraries written on top of native TensorFlow.
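To make that concrete, here is a minimal sketch, assuming TF 1.x-era APIs, showing that a Keras layer builds the same kind of native TensorFlow ops you would write by hand:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 64])

# High-level: a dense layer written with Keras.
y_keras = tf.keras.layers.Dense(10, activation="relu")(x)

# Low-level: the same computation written in native TensorFlow ops.
w = tf.Variable(tf.random_normal([64, 10]))
b = tf.Variable(tf.zeros([10]))
y_native = tf.nn.relu(tf.matmul(x, w) + b)
```

Both paths add equivalent matmul/add/relu ops to the graph, so any accuracy difference comes from the model and training setup, not from the wrapper.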
Question about TensorFlow:
I was looking at the video and model on the site, and it appeared to only have SGD as an algorithm for machine learning. I was wondering if other algorithms, such as L-BFGS, are also included in TensorFlow.
Thank you for your responses.
TensorFlow's jargon for algorithms such as Stochastic Gradient Descent (SGD) is optimizer. The following are the optimizers supported by TensorFlow:
GradientDescentOptimizer
AdadeltaOptimizer
AdagradOptimizer
AdamOptimizer
FtrlOptimizer
MomentumOptimizer
RMSPropOptimizer
You can also use the TensorFlow SciPy optimizer interface (ScipyOptimizerInterface), which gives you access to optimizers like L-BFGS.
Further, the tf.train module documentation lists all the available TensorFlow optimizers you could use.
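As a minimal sketch, assuming TF 1.x APIs (the quadratic loss is purely illustrative), here is how a built-in optimizer and the SciPy interface are used:

```python
import tensorflow as tf

# Toy objective: minimize ||x||^2, just to exercise the optimizer APIs.
x = tf.Variable([3.0, 4.0])
loss = tf.reduce_sum(tf.square(x))

# A built-in first-order optimizer (SGD):
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

# L-BFGS via the SciPy optimizer interface (lives in tf.contrib.opt):
lbfgs = tf.contrib.opt.ScipyOptimizerInterface(loss, method="L-BFGS-B")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):   # SGD takes one step per session run
        sess.run(train_op)
    lbfgs.minimize(sess)   # L-BFGS-B runs to convergence in one call
    print(sess.run(loss))
```

Note the different calling conventions: built-in optimizers produce a training op you run repeatedly, while the SciPy interface drives the whole optimization itself.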
PyMC comes with numerous examples but LDA, which is a relatively simple graphical model, is not one of them. There are questions on numerous sites about this but never any references to implementations. I've considered how it might be implemented but it's not clear how PyMC would be used to establish the topic-word dependencies within each document.
Can LDA be implemented relatively easily with PyMC?
I tried to implement and explore the solution provided in the link by Tom Minka and found that it gives some nice results. However, I have yet to verify with 100% confidence that this is a correct LDA implementation.
The IPython notebook can be viewed at: https://github.com/napsternxg/ipython-notebooks/blob/master/PyMC_LDA.ipynb
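For reference, here is a condensed sketch in the spirit of that implementation, assuming PyMC2 APIs (pm.Dirichlet, pm.CompletedDirichlet, pm.Lambda); the toy corpus, dimensions, and variable names are purely illustrative:

```python
import numpy as np
import pymc as pm

K, V, D = 2, 4, 3  # topics, vocabulary size, documents
data = np.array([[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]])  # word ids per doc
Wd = [len(doc) for doc in data]

alpha = np.ones(K)  # prior on per-document topic proportions
beta = np.ones(V)   # prior on per-topic word distributions

theta = pm.Container([pm.CompletedDirichlet("theta_%i" % d,
                      pm.Dirichlet("ptheta_%i" % d, theta=alpha))
                      for d in range(D)])
phi = pm.Container([pm.CompletedDirichlet("phi_%i" % k,
                    pm.Dirichlet("pphi_%i" % k, theta=beta))
                    for k in range(K)])

# Latent topic assignment for every word position in every document.
z = pm.Container([pm.Categorical("z_%i" % d, p=theta[d], size=Wd[d],
                  value=np.random.randint(K, size=Wd[d]))
                  for d in range(D)])

# Observed words: each word is drawn from its assigned topic's distribution.
w = pm.Container([pm.Categorical("w_%i_%i" % (d, i),
                  p=pm.Lambda("phi_z_%i_%i" % (d, i),
                              lambda z=z[d][i], phi=phi: phi[z]),
                  value=data[d][i], observed=True)
                  for d in range(D) for i in range(Wd[d])])

model = pm.Model([theta, phi, z, w])
pm.MCMC(model).sample(1000, burn=200)
```

The pm.Lambda node is the piece that ties each word's distribution to its stochastic topic assignment, which is exactly the topic-word dependency the question asks about.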
I'm following the Stanford Machine Learning class with Prof. Andrew Ng and I would like to start implementing the examples in Ruby.
Are there any frameworks/gems/libs or existing code out there that approach machine learning in Ruby? I have found some related questions and projects, but they seem to be quite old.
The algorithms themselves are not language specific. You can implement them using any language you want. For maximum efficiency you will want to use matrix/vector based computing.
Ruby has a built-in Matrix class that you can use to implement these algorithms. The implementation will be very similar to the one in Octave. Everything you need to implement the algorithms yourself is included in the standard library for Ruby 1.9+.
Octave is used because it provides a thorough and easy Matrix library.
Be sure to check this gist; it has plenty of information:
Resources for Machine Learning in Ruby
Moreover, the following are some noteworthy algorithm libraries (which may or may not already be listed in the gist above):
AI4R
http://www.ai4r.org/ - https://github.com/SergioFierens/ai4r
AI4R is a collection of Ruby algorithm implementations covering several artificial intelligence fields, with simple practical examples using them. A Ruby playground for AI researchers. It implements:
Genetic algorithms
Self-organized maps (SOM)
Neural Networks: Multilayer perceptron with Backpropagation learning, Hopfield net.
Automatic classifiers (Machine Learning): ID3 (Decision Trees), PRISM (J. Cendrowska, 1987), Multilayer Perceptron, OneR (AKA One Attribute Rule, 1R), ZeroR, Hyperpipes, Naive Bayes, IB1 (D. Aha, D. Kibler - 1991).
Data clustering: K-means, Bisecting k-means, Single linkage, Complete linkage, Average linkage, Weighted Average linkage, Centroid linkage, Median linkage, Ward's method linkage, Diana (Divisive Analysis)
kmeans-clusterer - k-means clustering in Ruby:
https://github.com/gbuesing/kmeans-clusterer
kmeans-clustering - A simple Ruby gem for parallelized k-means clustering:
https://github.com/vaneyckt/kmeans-clustering
tlearn-rb - Recurrent Neural Network library for Ruby:
https://github.com/josephwilk/tlearn-rb
TensorFlow Ruby wrapper - As of this writing, it seems that work is about to begin on building a TensorFlow Ruby API:
https://github.com/tensorflow/tensorflow/issues/50#issuecomment-216200945
If JRuby is a viable alternative to Ruby for you:
weka-jruby - Machine Learning & Data Mining with JRuby based on the Weka Java library:
https://github.com/paulgoetze/weka-jruby
jruby_mahout - JRuby Mahout is a gem that unleashes the power of Apache Mahout in the world of JRuby:
https://github.com/vasinov/jruby_mahout
UPDATE: the Resources for Machine Learning in Ruby gist above is now starting to be maintained as a repository: https://github.com/arbox/machine-learning-with-ruby
Try Rumale and Numo::NArray
https://github.com/yoshoku/rumale
Rumale (Ruby machine learning) is a machine learning library in Ruby. Rumale provides machine learning algorithms with interfaces similar to Scikit-Learn in Python. Rumale supports Linear / Kernel Support Vector Machine, Logistic Regression, Linear Regression, Ridge, Lasso, Factorization Machine, Naive Bayes, Decision Tree, AdaBoost, Gradient Tree Boosting, Random Forest, Extra-Trees, K-nearest neighbor classifier, K-Means, K-Medoids, Gaussian Mixture Model, DBSCAN, Power Iteration Clustering, Multidimensional Scaling, t-SNE, Principal Component Analysis, and Non-negative Matrix Factorization.