Pattern Image Processing

I want to make a simple application that will detect patterns on a wall, like the one in the image below.
The patterns will be pasted on the wall. The camera will rotate 360 degrees and identify each pattern.
I asked someone I know in the EEE field, and he said that I could use OpenCV, but that OpenCV can only recognize one pattern. Is this true?
I am new to image processing. I hope someone can advise me on how I should approach this project. If there are any valuable references, please share. Your help would be much appreciated.

Yes and no. Feature-based methods like SURF work best when there is only one instance of a pattern in the image, but you can use contour analysis to recognize patterns like the one in your image. You can also use AdaBoost to find your patterns if they are more complex.
OpenCV is just a library that provides methods for image processing; you can use whatever suits you best.
There are many tutorials about AdaBoost and SURF/SIFT/ORB/BRISK... Contour analysis is more complex.
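For instance, here is a minimal ORB matching sketch (the file names are placeholders); ORB is a free alternative to SURF/SIFT, which were patented in older OpenCV builds:

    import cv2

    # Load the reference pattern and a camera frame in grayscale.
    pattern = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("wall.png", cv2.IMREAD_GRAYSCALE)

    # Detect keypoints and compute binary descriptors with ORB.
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(pattern, None)
    kp2, des2 = orb.detectAndCompute(frame, None)

    # Brute-force Hamming matching suits ORB's binary descriptors.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

    # Few (or only high-distance) matches suggest the pattern is absent.
    print(len(matches), "matches; best distance:", matches[0].distance)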
Good Luck!

Removing skew/distortion based on known dimensions of a shape

I have an idea for an app that takes a printed page with a square in each of its four corners and allows you to measure objects on the paper, provided at least two squares are visible. I want users to be able to take a picture from a less-than-perfect angle and still have the objects measured accurately.
I'm unable to figure out how to find information on this subject due to my lack of knowledge in the area. I've found examples of OpenCV code that do some interesting transforms and the like, but I've yet to figure out how to phrase what I'm asking in simpler terms.
Does anyone know of papers or mathematical concepts I can look up to get further into this project?
I'm not quite sure how or whom to ask other than people on this forum; sorry for the somewhat vague question.
What you describe is very reminiscent of augmented reality marker tracking. Maybe you can start by searching these words on a search engine of your choice.
A single marker, if done correctly, can be used to identify it without confusing it with other markers AND to determine how the surface is placed in 3D space in front of the camera.
But that's all very difficult and advanced stuff; I'd strongly advise against trying to implement something like this yourself, as it would take years of research. Your best option is to use a ready-made open-source library that outputs the data you need for your app.
Such a library may not even exist; in that case you'll have to buy one. Given how niche your problem is, that would be perfectly plausible.
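For reference, once you have the four corners of a marker in the image, recovering the surface's 3D pose is a single OpenCV call. A minimal sketch, assuming a calibrated camera and a marker of known size (all the numbers below are placeholders):

    import cv2
    import numpy as np

    # 3D corners of a square marker with 5 cm sides, in the marker's
    # own coordinate frame (Z = 0 because the marker is flat).
    side = 0.05
    object_pts = np.array([[0, 0, 0], [side, 0, 0],
                           [side, side, 0], [0, side, 0]], np.float32)

    # The same four corners as detected in the image (pixel coords).
    image_pts = np.array([[210, 120], [380, 135],
                          [370, 300], [200, 290]], np.float32)

    # Camera intrinsics from a prior calibration step.
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)
    dist = np.zeros(5)  # assume negligible lens distortion

    # Rotation and translation of the marker relative to the camera.
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
    print(rvec.ravel(), tvec.ravel())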
Here I'll give you only the programming aspect; if you want, you can work out the mathematical aspect from these examples. Most of the functions you need are available in OpenCV. Here are some examples in Python:
To detect the printed paper, you can use the cv2.findContours function. The outermost contour is probably the paper, but you need to test on actual images. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
If the paper is skewed (not photographed at a perfect angle), you can find the angle with cv2.minAreaRect, which returns the angle of the contour you found above. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html (part 7b).
If you want to rotate the paper, use cv2.warpAffine. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
To detect the objects on the paper, there are several methods. The easiest is to use the contours above; if the objects have distinctive colors, you can detect them with a color filter. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
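Putting those pieces together, a rough deskewing sketch might look like this (the threshold and file names are placeholders to tune, and it assumes the OpenCV 4 findContours signature):

    import cv2

    # Load the photo and make a binary mask of the bright paper.
    img = cv2.imread("page.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

    # Assume the largest external contour is the paper.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    paper = max(contours, key=cv2.contourArea)

    # minAreaRect returns the rotated bounding box's center and angle.
    (cx, cy), _, angle = cv2.minAreaRect(paper)

    # Rotate around the paper's center to undo the skew.
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    h, w = img.shape[:2]
    deskewed = cv2.warpAffine(img, M, (w, h))
    cv2.imwrite("deskewed.jpg", deskewed)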

Image segmentation for YOLO

For a project I am using YOLO to detect phallusia (microbial organisms) that swim into focus in a video. The issue is that I have to train YOLO on my own data. The data needs to be segmented so I can isolate the phallusia. I am not sure how to properly segment/cut out the phallusia to fit the format that YOLO needs. For example, in the picture below I want YOLO to detect when a phallusia is in focus, similar to the one I have boxed in red. Do I just cut out that segment of the image, save it as its own image, and feed that to YOLO? Do all segmented images need to have the same dimensions? I'm not sure what I am doing and could use some guidance.
It looks like you need to start from the basics; OK, no fear. I will try to suggest a simple route to get started with YOLO techniques efficiently. Luckily, the web has a lot of examples.
Understand WHAT a YOLO method is.
Andrew Ng's YOLO explanation is a good start, but only if you already know what classification and detection are.
Understand the YOLO loss function, the heart of the algorithm.
Check the YOLO paper itself; don't be scared. On page 2, in the Unified Detection section, you will find the details of the bounding box encoding used, but be aware that you can use whatever notation you want (even invent a new one), as long as it stays compatible with the loss function, which is the real core of this algorithm.
Start to implement an example
As I wrote above, there are plenty of examples. You can check this one if you are familiar with Python and TensorFlow.
Inside it you will find a way to prepare the dataset, which I think is your goal in this question. In this case a tool named labelImg is used.
I hope this is useful. Please share your code when it's ready, I'm curious :). Good luck!
Do I just cut out that segment of the image, save it as its own image, and feed that to YOLO?
You need as many images of your microbial organism as you can get, in different sizes, positions, etc. It doesn't need to be the only thing in the image, but you need to know its <x> <y> <width> <height> position.
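For the darknet flavor of YOLO, each training image gets a .txt file with one line per object in exactly that format, normalized to [0, 1]. A minimal conversion sketch (the helper name is mine):

    # Convert a pixel-space box to a darknet label line:
    # "<class> <x_center> <y_center> <width> <height>", all normalized
    # by the image dimensions.
    def to_darknet(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
        x_center = (x_min + x_max) / 2 / img_w
        y_center = (y_min + y_max) / 2 / img_h
        width = (x_max - x_min) / img_w
        height = (y_max - y_min) / img_h
        return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

    # Example: a box from (100, 50) to (300, 200) in a 640x480 frame.
    print(to_darknet(0, 100, 50, 300, 200, 640, 480))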
Do all segmented images need to have the same dimensions?
No, they can be of any size; YOLO adapts them. See the VOC dataset for examples of the images YOLO is normally trained on. A couple of examples:
kitchen, dogs
Not sure what I am doing and could use some guidance.
My advice would be to follow the instructions for "Training YOLO on VOC" from the original Yolo website; https://pjreddie.com/darknet/yolo/
Once you have that working, you will have a better idea of the steps you need to take.
I had similar problems when I wanted to train YOLOv2 for some game cards.
To solve the problem, I took a picture of every game card with my cellphone and cut them out. Because I didn't have enough training data, I wrote a dataset generator program that produced training data from the card photos. The program can duplicate, rotate, and scale an image and then place it on a background.
You may run into problems if you don't have enough training data; in that case, don't panic, because you can generate a large dataset from a handful of raw images by rotating and scaling them.
Here you can find my dataset generator, which is able to generate Pascal VOC style and darknet style training data: https://github.com/szaza/dataset-generator. Feel free to reuse it if you need something similar.
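A stripped-down version of that generator idea (file names are placeholders; a real generator would also write the matching label files):

    import random
    import cv2

    # Paste a randomly rotated and scaled object photo onto a larger
    # background, returning the composite and the resulting box.
    # Assumes the background is larger than the object photo.
    def synthesize(obj, background):
        angle = random.uniform(0, 360)
        scale = random.uniform(0.5, 1.0)
        h, w = obj.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
        warped = cv2.warpAffine(obj, M, (w, h))

        # Drop the patch at a random position (black corners and all;
        # a real generator would blend with a mask).
        bg = background.copy()
        x = random.randint(0, bg.shape[1] - w)
        y = random.randint(0, bg.shape[0] - h)
        bg[y:y + h, x:x + w] = warped
        return bg, (x, y, w, h)

    card = cv2.imread("card.png")
    table = cv2.imread("table.jpg")
    sample, bbox = synthesize(card, table)
    cv2.imwrite("sample_000.jpg", sample)
    print("bounding box:", bbox)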

Correlation between two images (binary images)

I have two binary images like the ones below. I have a data set with lots of pictures like the one at the bottom, but with different signs.
I would like to compare them in order to know whether they show the same figure or not (especially inside the triangle). I took a look at SIFT and SURF features, but they don't work well on this type of picture (they find matching points even when the two pictures are different, especially inside).
I have also heard about SVMs, but I don't know whether I need one for this type of problem.
Do you have any ideas?
Thank you.
I think you should not use SURF features on the binary image as you have already discarded a lot of information at that stage with your edge detector.
You could also use the line or circle Hough transform, which in this case could tell you a lot about the differences between the images.
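A minimal sketch of that idea with the probabilistic Hough transform (file name and thresholds are placeholders to tune):

    import cv2
    import numpy as np

    # Load one binary image and detect line segments in it.
    img = cv2.imread("sign_a.png", cv2.IMREAD_GRAYSCALE)

    lines = cv2.HoughLinesP(img, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=30, maxLineGap=5)

    # Comparing the count and orientation of segments across the two
    # images gives a crude structural similarity signal.
    print(0 if lines is None else len(lines), "segments found")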
If you want to find two exactly identical images, simply use a hash function like MD5.
But if you want to find related (not exactly identical) images, you're in for some trouble ;). Look into artificial neural network libraries...
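The hash idea is only a few lines in Python (file names are placeholders); note it catches byte-for-byte duplicates only, since a single changed pixel changes the digest:

    import hashlib

    # Two files are identical iff their MD5 digests match.
    def md5_of(path):
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    print(md5_of("sign_a.png") == md5_of("sign_b.png"))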

Simple morphing animation between two images

I'm looking to implement a simple morphing animation between two images.
Here's a simple demo of what I'm trying to create: http://i.imgur.com/7377yHr.gif
I'm pretty comfortable with Objective-C and JavaScript but since the concepts and algorithms are abstract, I'm more than willing to see examples in any language or framework.
I would like to know how hard it would be to tackle this. It doesn't have to be exact; as long as it gives the impression of a morph, I'll be satisfied.
Where would I start?
It seems your example uses a combination of mesh warp morphing and cross-dissolve morphing. Mesh morphing can be tricky, and as far as I know it requires manual input (defining the mesh), so depending on what you want to do it might not be suitable for you.
If you are looking for a cheap technique (in terms of effort), just doing a cross-dissolve would probably work for you, since it is very easy to implement. You just combine both images, increasing the alpha of the target image while decreasing the alpha of the origin image.
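A cross-dissolve is just a per-frame weighted blend. A minimal OpenCV sketch (file names are placeholders; both images must have the same dimensions):

    import cv2

    origin = cv2.imread("face_a.jpg")
    target = cv2.imread("face_b.jpg")

    # Stepping t from 0 to 1 over successive frames gives the morph;
    # addWeighted computes origin*(1-t) + target*t per pixel.
    frames = []
    steps = 30
    for i in range(steps + 1):
        t = i / steps
        frames.append(cv2.addWeighted(origin, 1 - t, target, t, 0))

    cv2.imwrite("halfway.jpg", frames[steps // 2])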
These articles give an overview of the techniques:
[PDF] http://css1a0.engr.ccny.cuny.edu/~wolberg/pub/vc98.pdf
[PDF] http://www.sorging.ro/en/member/serveFile/format/pdf/slug/image-morphing-techniques
[PDF] http://cs.haifa.ac.il/hagit/courses/ip/Lectures/Ip05_GeomOper.pdf
The last link comes from a comment in a similar question: Morphing, 3 algorithms, image processing

Morphing 2 face images

I would like some help from the OpenCV aficionados here.
I would like to know which direction to take (and to get some advice or a piece of code) on how to morph 2 faces together with a given ratio, say 10% of the first and 90% of the second.
I have seen functions like cvWarpAffine and cvMakeScanlines but I am not sure how to use them.
So if somebody could help me here, I'll be very grateful.
Thanks in advance.
Unless the images compared are the exact same images, you would not go very far with this.
This is an artificial intelligence problem and needs to be solved as such. A typical solution involves:
Normalising the data (removing noise, skew, ...) from the images
Feature extraction (turn the image into a smaller set of data)
Use machine learning (typically a classifier) to train on your matched data
Test the result
Refine previous processes according to the results until you get good recognition
The choice of OpenCV functions used depends on your feature extraction method. Have a look at Eigenface.
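For the feature extraction step, the Eigenface idea boils down to PCA on flattened face images. A minimal NumPy sketch on placeholder random data:

    import numpy as np

    # Rows are flattened, equally sized grayscale face images;
    # random placeholder data stands in for a real dataset.
    rng = np.random.default_rng(0)
    faces = rng.random((100, 64 * 64))

    # Center the data and get principal components via SVD.
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    _, _, vt = np.linalg.svd(centered, full_matrices=False)

    # Keep the top-k eigenfaces and project each face onto them;
    # the resulting low-dimensional vectors feed a classifier.
    k = 20
    eigenfaces = vt[:k]
    features = centered @ eigenfaces.T
    print(features.shape)  # (100, 20)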
