How to use PNG masks for segmentation in TensorFlow object detection (Windows)

I am following the tutorial from here.
I use the mask_rcnn_inception_v2 detection model with my own dataset. I want to add PNG masks; I used some applications to create them, but I wonder how to feed this data into the detection pipeline. I don't see it mentioned anywhere.
How do I implement the PNG masks in object detection? (Where do I put them, and how do I use them?)
Also, do you know how to launch evaluation and training at the same time? On TensorBoard I see it is possible.
And generally, where can I ask general TensorFlow questions, such as explanations of the configuration file?
On the TensorFlow GitHub it is specified that we have to ask questions here, because they are not TensorFlow issues, and there is a great community here with some great guys!

Thanks to a guy on GitHub who pointed out a missing setting in pipeline.config:
number_of_stages: 3
It changes all the results: I can see the masks now. Hooray!
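For reference, this setting sits in the faster_rcnn block of pipeline.config; a sketch of just the relevant fragment (the rest of the stock mask_rcnn_inception_v2 config is unchanged):

model {
  faster_rcnn {
    # 2 = boxes only; 3 enables the third stage that predicts masks
    number_of_stages: 3
  }
}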

For any further information, there's a good explanation here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/instance_segmentation.md
It explains how to prepare your masks and what to modify.
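To give a flavour of what that doc describes: with its PNG_MASKS option, each object's binary mask is PNG-encoded and stored under the 'image/object/mask' key of the tf.train.Example. A minimal sketch in Python (the helper name and the 0/1 mask arrays are my own illustration):

import io
import numpy as np
import tensorflow as tf
from PIL import Image

def encode_instance_masks(masks):
    # masks: list of numpy arrays of 0/1, one binary mask per object instance.
    encoded = []
    for mask in masks:
        buf = io.BytesIO()
        Image.fromarray(mask.astype(np.uint8)).save(buf, format="PNG")
        encoded.append(buf.getvalue())
    # The Object Detection API's PNG_MASKS mode reads this feature key.
    return {"image/object/mask": tf.train.Feature(
        bytes_list=tf.train.BytesList(value=encoded))}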

Related

Globbing a bunch of .pkl files into one, then converting to .fbx (on Windows)

I'm experimenting with the beautiful FrankMocap, feeding it a video and getting quite accurate hand and body tracking. The tool also outputs a .pkl file (a format I'm not familiar with) for each frame.
I'd like to convert these files into a usable 3D file, but 1. I've discovered I can't use glob.h with ffmpeg on Windows, and 2. I can't get them converted to .fbx.
Along with FrankMocap, I've tried VIBE, but I still end up with the same problem.
I'm using miniconda3.
I hope someone can help me! Thank you for your time.
Someone has taken a useful script made for VIBE and adapted it to work with FrankMocap keypoints.
The script takes a folder of PKL frames generated by FrankMocap and then uses Blender to animate them into an FBX rig. It doesn't currently include the hands, though, which is something I'm trying to solve in it myself (and how I found your question).
Link to the script: https://github.com/facebookresearch/frankmocap/files/5750266/fbx_output_FRANKMOCAP.zip
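As for the glob part of the question: on Windows you don't need glob.h at all; Python's standard library can collect and load the per-frame files. A minimal sketch (the output folder name is an assumption about FrankMocap's layout):

import glob
import os
import pickle

# Gather the per-frame PKL files in frame order.
pkl_files = sorted(glob.glob(os.path.join("mocap_output", "mocap", "*.pkl")))

frames = []
for path in pkl_files:
    with open(path, "rb") as f:
        frames.append(pickle.load(f))  # one dict of pose parameters per frame

print(f"Loaded {len(frames)} frames")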

2D distributions in the HistFactory?

How can I specify in the construction of the HistFactory the signal and background to be 2-dimensional distributions?
I understand that in RooStats you need to change the TH1 to a TH2.
When writing my model in the JSON file, can I use an ndarray to do something similar?
What is the correct way to do this?
I hope someone can help me and thank you in advance.
Currently the best way is to unroll the distributions, e.g.
{'data': 2darray.ravel().tolist()}
since mathematically it makes no difference.
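For example, a minimal sketch of that unrolling with NumPy and pyhf (the histogram values are made up):

import numpy as np
import pyhf

# Hypothetical 2D signal and background histograms (3x2 bins each).
signal_2d = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
background_2d = np.full((3, 2), 10.0)

# Unroll to 1D; the bin ordering just has to match between samples and data.
spec = {
    "channels": [{
        "name": "singlechannel",
        "samples": [
            {"name": "signal",
             "data": signal_2d.ravel().tolist(),
             "modifiers": [{"name": "mu", "type": "normfactor", "data": None}]},
            {"name": "background",
             "data": background_2d.ravel().tolist(),
             "modifiers": []},
        ],
    }]
}
model = pyhf.Model(spec)
print(model.config.npars)  # the parameters are unaffected by the unrolling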
Converting from XML+ROOT is not yet supported (but could be); if you need it, please open an issue on GitHub.
Thanks for using pyhf!

Picture processing program, art generator

I've found a really amazing wallpaper and later found out that it was a modified version of the original picture (which is an album cover).
Is anyone aware of the program or script being used in order to produce such kind of picture given any input?
(Images attached: the original picture, the processed picture, and a zoomed view of the processed picture showing the style.)
I am aware of a computer vision area called "style transfer" using deep neural networks. There is a famous paper on this topic.
A few GitHub repos:
https://github.com/lengstrom/fast-style-transfer
https://github.com/fzliu/style-transfer
https://github.com/anishathalye/neural-style
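If you just want to try it quickly, here is a minimal sketch in Python using the pre-trained arbitrary-stylization model published on TensorFlow Hub (the file names are placeholders):

import tensorflow as tf
import tensorflow_hub as hub

def load_image(path):
    # Decode to a float32 batch of shape (1, h, w, 3) with values in [0, 1].
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

content = load_image("album_cover.jpg")    # the picture to restyle
style = load_image("style_reference.jpg")  # the look to copy

# Pre-trained arbitrary style transfer model from TensorFlow Hub.
model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = model(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img("stylized.png", stylized[0].numpy())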
I would say it can be done easily by script. I am building an art generator called the BoredArt Club, using PHP's GD library for image manipulation. But if I wanted to use ImageMagick I could do more, and it has a Windows command-line option.
As for online tools, look at LunaPic and imgonline (Ukrainian, I think); they have masking tools, textures, and other fun stuff.
I have a Bash shell script using ImageMagick that will do something similar. See my script, weave.
(An example input image and its processed result were attached.)

What is a good link to examples of enaml being used with traits and matplotlib?

I have done GUI construction, but not in Python. From other Stack Exchange questions and my own investigation, it looks like I want to use enaml and traits for the bulk of this work. Are there any links or references to help me get started?
This is a scientific application integrating matplotlib plots, text boxes, and buttons (very simple, I think). I have gone through this example but don't understand it too well: http://code.enthought.com/projects/traits/docs/html/tutorials/traits_ui_scientific_app.html
I have also gone through the Enthought Chaco examples and don't get very far. Has somebody built a program that I could run and look at the code for? Or is there a repository of examples I am not aware of? I found the enaml examples, but the matplotlib one is basic and does not show me how to connect my algorithms to the plots. Thanks in advance!
Not a full answer, but for additional context:
1) Use https://github.com/nucleic/enaml, along with https://github.com/enthought/traits-enaml
2) Example:
https://github.com/nucleic/enaml/blob/master/examples/widgets/mpl_canvas.enaml
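And, for completeness, a minimal sketch of how such a .enaml view is typically launched from Python (assuming a main.enaml file defining a Main window, e.g. one wrapping an MPLCanvas like the linked example):

import enaml
from enaml.qt.qt_application import QtApplication

# .enaml files are loaded through this import hook, not a plain import.
with enaml.imports():
    from main import Main  # assumes main.enaml sits next to this script

app = QtApplication()
view = Main()
view.show()
app.start()  # enter the Qt event loop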

Feature Extraction from Images to use with LIBSVM

I'm really stuck right now. I want to apply LIBSVM to image classification. I captured lots of training images (in BMP format) from which I want to extract features.
The training images contain people lying on the floor. The classifier should decide whether or not a given image contains a person lying on the floor.
I have read lots of papers, documentation, guides, and tutorials, but none of them explain how to build a LIBSVM data package. The only thing described is how to convert to LIBSVM format from a CSV file like this one: CSV-File. Several example datasets can be downloaded from the LIBSVM website, prepared either as CSV files or as ready-to-use training and test data.
If you look at the values in the CSV file, the first column holds the labels (lying person or not) and the remaining values are the extracted features, but I still can't reconstruct how those values were obtained.
I don't know if it's so simple that nobody bothers to mention it, but I just can't get through it, so if anybody knows how to perform feature extraction from images, please help me.
Thank you in advance,
Regards
You need to do feature extraction first. There are many methods available, including LBP (local binary patterns), Gabor filters, and many more. These will give you the features to feed into LIBSVM. Hope this helps.
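To make that concrete, here is a minimal sketch using scikit-image for LBP features and scikit-learn to write the sparse "label index:value" file LIBSVM expects (the file names and parameters are placeholders):

import numpy as np
from skimage.feature import local_binary_pattern
from skimage.io import imread
from sklearn.datasets import dump_svmlight_file

def lbp_histogram(path, points=8, radius=1):
    # Normalized histogram of "uniform" LBP codes as the feature vector.
    gray = imread(path, as_gray=True)
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2  # number of distinct uniform LBP codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Hypothetical file lists: positives contain a lying person, negatives do not.
positives = ["person_001.bmp", "person_002.bmp"]
negatives = ["empty_001.bmp", "empty_002.bmp"]

X = np.array([lbp_histogram(p) for p in positives + negatives])
y = np.array([1] * len(positives) + [-1] * len(negatives))

# One row per image: "<label> 1:<feat1> 2:<feat2> ...", as LIBSVM expects.
dump_svmlight_file(X, y, "training_data.libsvm", zero_based=False)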
