So basically I need an animation that cycles between two pictures I have (let's call them picA and picB). I don't really understand the animategraphics command in LaTeX. How would I implement it, or can someone link a description of the command with all its parameters?
Sorry for this fairly idiotic question, but I can't seem to find what I want online.
For a project I am using YOLO to detect phallusia (microbial organisms) that swim into focus in a video. The issue is that I have to train YOLO on my own data. The data needs to be segmented so I can isolate the phallusia. I am not sure how to properly segment/cut out the phallusia to fit the format that YOLO needs. For example, in the picture below I want YOLO to detect when a phallusia is in focus, similar to the one I have boxed in red. Do I just cut out that segment of the image, save it as its own image, and feed that to YOLO? Do all segmented images need to have the same dimensions? I am not sure what I am doing and could use some guidance.
It looks like you need to start from the basics; no fear. I will try to suggest a simple route to start using YOLO techniques efficiently. Luckily, the web has a lot of examples.
Understand WHAT is a YOLO method.
Andrew Ng's YOLO explanation is a good start, but only if you already know what classification and detection are.
Understand the YOLO Loss function, the heart of the algorithm.
Check the YOLO paper itself; don't be scared. On page 2, in the Unified Detection section, you will find the information about the bounding-box detection used, but be aware that you can use whatever notation you want (even invent a new one), as long as it stays compatible with the loss function, the real heart of this algorithm.
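For instance, the loss compares predicted and ground-truth boxes, and detection quality is typically measured with intersection over union (IoU). A minimal sketch, assuming boxes are given as (x_min, y_min, x_max, y_max) tuples (my notation, not the paper's):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # two partially overlapping boxes: 1/7 ≈ 0.1429
```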
Start to implement an example
As I wrote above, there are plenty of examples. You can check this one if you are familiar with Python and TensorFlow.
Inside it you will find a way to prepare the dataset, which I think is the target of your question. In this case a tool named labelImg is used.
I hope this is useful. Please share your code when it is ready; I'm curious :). Good luck!
Do I just cut-out that segment of the image and save it as its own image and feed to that to YOLO?
You need as many images of your microbial organism as you can get, in different sizes, positions, etc. It doesn't need to be the only thing in the image, but you need to know its <x> <y> <width> <height> position.
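For reference, darknet-style YOLO labels store one line per object as <class> <x_center> <y_center> <width> <height>, with everything after the class ID normalized by the image size. A small converter from a pixel-space box (the helper name is my own, not part of any YOLO tooling) might look like:

```python
def to_darknet_label(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into a
    darknet-style label line with normalized center and size."""
    x_min, y_min, x_max, y_max = box
    x_center = (x_min + x_max) / 2.0 / img_w
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# A 100x50 box with its top-left corner at (200, 100) in a 640x480 image:
print(to_darknet_label(0, (200, 100, 300, 150), 640, 480))
```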
Do all segmented images need to have the same dimensions?
No, they can be of any size; YOLO adapts them. See the VOC dataset for examples of the images YOLO is normally trained on. A couple of examples:
kitchen, dogs
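To sketch why any size works: YOLO resizes every input to a fixed network resolution before inference. One common aspect-preserving variant ("letterboxing") can be computed like this; the helper and the 416-pixel default are illustrative assumptions, not darknet's exact code:

```python
def letterbox_dims(img_w, img_h, net_size=416):
    """Compute the scaled size and padding needed to fit an image into a
    square net_size x net_size input while keeping its aspect ratio."""
    scale = min(net_size / img_w, net_size / img_h)
    new_w, new_h = int(img_w * scale), int(img_h * scale)
    # Center the scaled image; the rest is padding (usually gray bars)
    pad_x = (net_size - new_w) // 2
    pad_y = (net_size - new_h) // 2
    return new_w, new_h, pad_x, pad_y

print(letterbox_dims(640, 480))  # landscape image: pads top and bottom
```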
Not sure what I am doing and could use some guidance.
My advice would be to follow the instructions for "Training YOLO on VOC" from the original Yolo website; https://pjreddie.com/darknet/yolo/
Once you have that working, you will have a better idea of the steps you need to take.
I had similar problems when I wanted to train YOLOv2 for some game cards.
To solve the problem I took a picture of every game card with my cellphone and cut them out. Because I didn't have enough training data, I wrote a dataset generator program that produced training data from the photos of the cards. The program can multiply, rotate, and scale an image and then place it on a background.
It can happen that you run into problems because you don't have enough training data. In that case don't panic: from several raw images, rotating and scaling can generate a large dataset.
Here you can find my dataset generator, which is able to generate Pascal VOC style and darknet style training data: https://github.com/szaza/dataset-generator. Feel free to reuse it, if you need something similar.
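One detail worth noting when generating rotated training images: the label box must be recomputed, because rotating a w x h region enlarges its axis-aligned bounding box. A minimal sketch of that geometry (my own illustration, not code from the generator above):

```python
import math

def rotated_bbox(w, h, angle_deg):
    """Size of the axis-aligned bounding box of a w x h rectangle
    rotated by angle_deg around its center."""
    a = math.radians(angle_deg)
    # Project the rotated edges onto the axes and sum their extents
    new_w = abs(w * math.cos(a)) + abs(h * math.sin(a))
    new_h = abs(w * math.sin(a)) + abs(h * math.cos(a))
    return new_w, new_h

print(rotated_bbox(100, 50, 90))  # a 90-degree turn swaps the sides: ≈ (50, 100)
```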
I am trying to read text from a T-shirt image.
I tried Tesseract, but I don't get the proper text.
There seem to be lots of ways to read text from a white/constant background, but not from an arbitrary scene.
What is the recommended way to achieve this?
In this task you are facing the same challenges as any other OCR project dealing with unconventional media (not paper scans): variable colors, text on busy backgrounds, distortions in the print surface, various angles, fonts, etc. Each aspect that affects OCR quality requires its own methodology and handling process, and when they all work together, good results can be achieved.
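As one concrete example of such a methodology, separating text from a variable background often starts with binarization. A minimal pure-Python sketch of Otsu's thresholding over a flat list of 8-bit grayscale values (illustrative only; a real pipeline would use an imaging library):

```python
def otsu_threshold(pixels):
    """Find the threshold that maximizes between-class variance
    (Otsu's method) for 8-bit grayscale pixel values."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg = 0.0
    weight_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two clear intensity clusters: dark text vs. light background
pixels = [10] * 50 + [12] * 50 + [200] * 100 + [220] * 100
print(otsu_threshold(pixels))  # prints 12: the top of the dark cluster
```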
This blog post I wrote some time ago is a good place to start. It addresses nearly every issue you may face in your task.
http://www.ocr-it.com/user-scenario-process-digital-camera-pictures-and-ocr-to-extract-specific-numbers
Take a look at the other articles on the same blog.
A sample image would definitely help in answering with more specifics.
Live2D can animate a picture and make small movements, as shown in this video.
1. Does anyone know how it works?
2. Is there any paper describing the mechanism behind it? I tried a Google Scholar search but found few.
3. Is there any open-source work in this field?
The fundamental algorithm is like control points in Adobe Illustrator. Control points are like anchors for the image. You can shrink, stretch and bend the image by moving the control points.
Unfortunately, no. Live2D is developed entirely in-house at the company.
For now it is a closed project.
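Since the real algorithm is closed, here is only a toy illustration of the control-point idea described above: each point of the image moves by an inverse-distance-weighted blend of the control points' displacements. This is an assumption-laden sketch, not Live2D's actual method:

```python
def warp_point(p, controls, displacements, power=2.0):
    """Move point p by an inverse-distance-weighted blend of the
    control points' displacements (a toy control-point deformation)."""
    num_x = num_y = denom = 0.0
    for (cx, cy), (dx, dy) in zip(controls, displacements):
        d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2
        if d2 == 0:  # exactly on a control point: follow it rigidly
            return (p[0] + dx, p[1] + dy)
        w = 1.0 / d2 ** (power / 2)  # nearer controls dominate
        num_x += w * dx
        num_y += w * dy
        denom += w
    return (p[0] + num_x / denom, p[1] + num_y / denom)

# One control point dragged right by 10 px, another held fixed:
controls = [(0.0, 0.0), (10.0, 0.0)]
displacements = [(10.0, 0.0), (0.0, 0.0)]
print(warp_point((0.0, 0.0), controls, displacements))  # follows the dragged anchor
```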
I want to make a simple application that detects patterns on a wall, like the image below.
The patterns will be pasted on the wall. The camera will rotate around 360 degrees and identify the pattern.
I asked someone I know in the EEE field, and he said that I could use OpenCV. But he also said OpenCV can only recognize one pattern. Is this true?
I am new to image processing. I hope someone can advise me on how I should approach this project. If there are any valuable references, please share. Your help would be much appreciated.
Yes, it is true, but not quite. Methods like SURF need only one pattern per image. However, you can use contour analysis to recognize patterns like the one in your image, and AdaBoost to find your patterns if they are more complex.
OpenCV is just a library and provides many image-processing methods. You can use whatever suits you best.
There are many tutorials about AdaBoost and SURF/SIFT/ORB/BRISK; contour analysis is more complex.
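To make the idea concrete, the simplest pattern-detection technique slides the pattern over the image and scores each position; this toy sketch on plain lists is a brute-force cousin of OpenCV's matchTemplate, not an OpenCV call:

```python
def match_template(image, template):
    """Slide a small template over a 2-D image (lists of lists) and
    return the top-left (x, y) with the smallest sum of squared
    differences, i.e. the best match position."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_pos, best_score = None, float("inf")
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = sum(
                (image[y + j][x + i] - template[j][i]) ** 2
                for j in range(th)
                for i in range(tw)
            )
            if score < best_score:
                best_score, best_pos = score, (x, y)
    return best_pos

# A 5x5 image with a bright 2x2 blob at (2, 1); find it:
img = [
    [0, 0, 0, 0, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 9, 9, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(match_template(img, [[9, 9], [9, 9]]))  # → (2, 1)
```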
Good Luck!
I want to make a puzzle game in XNA where I take a picture and divide it into several parts, and the user has to get all the parts back in the right order. The thing is, I am new to XNA and WP programming, and I have googled a lot and still can't find any good links that help with what I am trying to do. I just want to know how I can split the texture into different parts. Could anyone help me by providing links if they have them, or guide me as to how this can be done?
Thanks
If you want square pieces, you can have multiple pieces that share the same image (the full puzzle). If you set the source rectangle parameter when you draw each puzzle piece (the sourceRectangle argument of SpriteBatch.Draw), you render only a part of the image, thus making different pieces.
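The source-rectangle idea can be sketched language-agnostically: compute one rectangle per piece, then pass each as the source region when drawing. A minimal sketch (in Python for brevity; in XNA each tuple would become a Rectangle passed to SpriteBatch.Draw):

```python
def piece_sources(tex_w, tex_h, cols, rows):
    """Compute the source rectangles (x, y, w, h) that cut a
    texture into a cols x rows grid of puzzle pieces."""
    pw, ph = tex_w // cols, tex_h // rows
    return [
        (c * pw, r * ph, pw, ph)  # left-to-right, top-to-bottom
        for r in range(rows)
        for c in range(cols)
    ]

# A 256x256 texture cut into a 2x2 puzzle:
for rect in piece_sources(256, 256, 2, 2):
    print(rect)
```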