I trained my model in Keras with TensorFlow 2.5 (from tensorflow import keras) and converted it to a TensorFlow model for inference using AWS SageMaker. The inference results are quite different. I have checked that the libraries are all the same version and that feature extraction is identical. What else can I review?
Thanks a lot.
One option would be to test inference on your local laptop using SageMaker local mode (it runs the SageMaker TensorFlow inference container locally using Docker). That lets you troubleshoot very quickly, since the container launches in seconds.
Additionally, you could debug the inference code using PyCharm (this example debugs training, but it could be close enough).
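Another quick check is to confirm whether the discrepancy comes from the model itself or from preprocessing: save the predictions (and ideally the preprocessed inputs) from both environments and compare them numerically. A minimal sketch using only NumPy; the .npy file names in the comments are hypothetical, and the inline arrays stand in for your real dumps:

```python
import numpy as np

def compare_predictions(local, remote, atol=1e-5):
    """Report how far apart two prediction arrays are and whether they match."""
    local = np.asarray(local, dtype=np.float64)
    remote = np.asarray(remote, dtype=np.float64)
    diff = np.abs(local - remote)
    print(f"max abs diff: {diff.max():.3e}, mean abs diff: {diff.mean():.3e}")
    return bool(np.allclose(local, remote, atol=atol))

# In practice, load the arrays you saved in each environment, e.g.:
#   local  = np.load("local_preds.npy")
#   remote = np.load("sagemaker_preds.npy")
local = np.array([[0.10, 0.90], [0.75, 0.25]])
remote = np.array([[0.10, 0.90], [0.75, 0.25]])
print(compare_predictions(local, remote))
```

If the preprocessed inputs already differ between the two environments, the problem is in feature extraction or input decoding; if the inputs match but the outputs don't, suspect the model export or the serving signature instead.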
I have come across a library called py_trees which provides the classes and functions to create a behavior tree, but I am using Windows 10 and I am wondering whether ROS (which runs on Ubuntu) is required to use py_trees.
I have gone through the GitHub pages of py_trees, and in most of them it is integrated with ROS, so I am wondering whether it can be used on Windows or not.
No, ROS is not required for py_trees, although the two work well together; there is a separate package (py_trees_ros) that specifically integrates it with ROS. py_trees itself works fine on Windows and can be installed via pip.
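For readers new to the pattern: a behavior tree composes small behaviours under nodes like sequences and selectors, none of which needs ROS. A toy sketch of a sequence node in plain Python (this is the underlying idea, not the py_trees API):

```python
# Toy behavior-tree sketch: a Sequence ticks its children in order and
# fails as soon as one child fails. Not the py_trees API, just the concept.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """A leaf behaviour with a fixed result, for illustration."""
    def __init__(self, name, result):
        self.name = name
        self.result = result

    def tick(self):
        return self.result

class Sequence:
    """Composite node: succeeds only if every child succeeds in order."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

tree = Sequence([Action("check_battery", SUCCESS), Action("move", SUCCESS)])
print(tree.tick())  # SUCCESS
```

py_trees provides the same building blocks (behaviours, sequences, selectors, blackboards) as ready-made classes, all in pure Python.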
Is there a way to import and simulate FMUs in Mathematica notebooks?
I have models developed in Modelica, and one of the users of the models is proficient with Mathematica notebooks but new to FMUs. I was curious whether Mathematica can import FMUs similarly to Python libraries like fmpy and pyfmi.
As mentioned in the comments, FMU import is supported in System Modeler; you can get a free trial here:
https://www.wolfram.com/system-modeler/trial/
Once you import the FMU, as indicated here:
https://reference.wolfram.com/system-modeler/UserGuide/ModelCenterFMUExportAndImport.html
you can simulate and access the model in Mathematica. When Mathematica and System Modeler of the same version are used in the same session, they share state, and models can be modified, simulated, created, etc. in either tool. In Mathematica this is done using the system modeling functionality:
https://reference.wolfram.com/language/tutorial/GettingStartedWithModelSimulationAndAnalysis.html#47330727
More specifically, simulation is fundamentally done through the function SystemModelSimulate:
https://reference.wolfram.com/language/ref/SystemModelSimulate.html
I have trained IBM Watson to recognize objects of interest. Since remote execution isn't a requirement, I want to export to .mlmodel with the provided tool and run it on macOS.
Unfortunately, learning Swift and macOS development isn't an option either. Is it possible to invoke Vision directly from the command line or from a scripting language? Alternatively, does anybody know of a skeleton macOS app that runs Vision over a list of files and produces classification scores in tabular form? Thanks.
The code mentioned in this article uses a downloaded Core ML model in an iOS app through the Watson SDK.
Additionally, here's a code sample that uses Watson Visual Recognition and Core ML to classify images. The workspace has two projects:
Core ML Vision Simple: Classify images locally with Visual Recognition.
Core ML Vision Custom: Train a custom Visual Recognition model for more specialized classification.
Refer to the code and instructions here.
Also, there’s a starter kit that comes with Watson Visual Recognition preconfigured with Core ML - https://console.bluemix.net/developer/appledevelopment/starter-kits/custom-vision-model-for-core-ml-with-watson
You can also load the .mlmodel into Python and use the coremltools package to make predictions. I wouldn't use that in a production environment, but it's fine for getting something basic up and running.
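A minimal sketch of that approach, producing the tabular scores the question asks for. The model path, input key, and output key in the comments are assumptions that depend on how your model was exported; the coremltools predict call itself only runs on macOS, so a dummy scorer stands in here so the sketch runs anywhere:

```python
import csv
import io

def scores_to_csv(results, out):
    """Write {filename: {label: score}} classification results as CSV rows."""
    writer = csv.writer(out)
    writer.writerow(["file", "label", "score"])
    for filename, scores in results.items():
        # Highest-scoring label first for each file.
        for label, score in sorted(scores.items(), key=lambda kv: -kv[1]):
            writer.writerow([filename, label, f"{score:.4f}"])

def classify(path):
    # On macOS with coremltools installed this would be, roughly:
    #   import coremltools as ct
    #   from PIL import Image
    #   model = ct.models.MLModel("watson_model.mlmodel")   # hypothetical path
    #   return model.predict({"image": Image.open(path)})["classLabelProbs"]
    # (input/output names vary by model). Dummy scores keep the sketch runnable.
    return {"cat": 0.9, "dog": 0.1}

buf = io.StringIO()
scores_to_csv({"photo1.jpg": classify("photo1.jpg")}, buf)
print(buf.getvalue())
```

Swapping the dummy classify for the real coremltools call and looping over a directory of files gives a small command-line classifier with tabular output, no Swift required.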
I'm working on object detection using dlib and have been going through the Python implementation. I tested a couple of the dlib Python examples; in particular I worked with train_object_detector.py, which works well. Now I would like to train on the same data with the CNN-based object detector, but I could not find a Python implementation for training the CNN, only a C++ example (dnn_mmod_ex.cpp). Am I missing something, or is a Python implementation simply not available?
If a Python implementation is not available, should I switch to C++ for CNN-based object detector training?
Yes, use C++ for CNN training. The dlib DNN tooling is meant to be used from C++ and relies on C++11 features that can't be represented in Python.
I'm looking into performing object detection (not just classification) using CNNs; I currently only have access to Windows platforms but can install Linux distributions if necessary. I would like to assess a number of existing techniques, but most available code is for Linux.
I am aware of the following:
Faster R-CNN (CNTK, Caffe w/ MATLAB)
R-CNN (MATLAB with loads of toolboxes)
R-FCN (Caffe w/ MATLAB)
From what I can see, there are no TensorFlow implementations for Windows currently available. Am I missing anything, or do I just need to install Ubuntu if I want to try more?
EDIT: A Windows version of YOLO can be found here: https://github.com/AlexeyAB/darknet
There is a TensorFlow implementation for Windows, but honestly it's always a step or two behind Linux.
One thing I would add to your list (since it was not already mentioned) is Mask R-CNN.
You can also look for a TensorFlow implementation of it, like this one: https://github.com/CharlesShang/FastMaskRCNN