I'm looking to create a program that detects HEADS with OpenCV, not just faces. There must be a way to do this. Besides heads, I need to identify the highest pixel of the head (the top of the hair) and the lowest center point of the chin... I'm not finding any OS X OpenCV examples. Here is a picture of what I'd like to achieve: https://pasteboard.co/GF19Fao.jpg
Looks simple enough, right?
I would recommend looking into Haar cascades for this.
This consists of a couple of cascades not included in OpenCV.
Method 1:
The profile and frontal face ones are a good starting point.
Personally, I have not tested them yet so it is hard for me to tell you what the results look like.
If the final bounding box fits perfectly over the entire head alone, then you can make the following assumptions as a starting point:
if box = (topLeft, topRight, bottomLeft, bottomRight)
then hairTop = midpoint(topLeft, topRight)
and chin = midpoint(bottomLeft, bottomRight)
If not you could do two things:
1. Make some measurement readjustments to see how far up or down to move the resulting rectangle in order to find the chin and the top of the hair.
2. You could use a combination of other classifiers as well.
Combine the results of the front/profile face classifier with the mouth classifier to find the chin.
Combine the front/profile face classifier with the upper-body classifier to find the top of the hair.
Method 2:
You could also use the front/profile face classifiers to find the top of the hair and just use the mouth classifier to find the chin.
Either method requires that you run multiple tests to find the optimal values/estimates that fulfill your task.
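To make Method 1 concrete, here is a minimal Python/OpenCV sketch, assuming the frontal face cascade that ships with OpenCV. Since a face box usually excludes the hair, treat the two midpoints as starting estimates to be readjusted as described in option 1 above.

```python
import cv2

# Frontal face cascade bundled with OpenCV (assumes OpenCV with cv2.data;
# otherwise pass the full path to the XML file).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("head.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    # Midpoints of the top and bottom edges of the detected box.
    hair_top = (x + w // 2, y)      # likely needs to move up to reach the hair
    chin = (x + w // 2, y + h)      # likely needs to move down to reach the chin
    cv2.circle(img, hair_top, 4, (0, 255, 0), -1)
    cv2.circle(img, chin, 4, (0, 0, 255), -1)

cv2.imwrite("annotated.jpg", img)
```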
Related
I have a problem finding defects at the edge of a circular object. It's hard to describe so I have a picture which may help a bit. I am trying to find the red marked areas, such as what is shown below:
I already tried matching with templates vision.TemplateMatcher(), but this only works well for the picture I made the template of.
I tried to match it with vision.CascadeObjectDetector() and I trained it with 150 images. I found only < 5% correct results with this.
I also tried matching with detectSURFFeatures() and then matchFeatures(), but this only works on quite similar defects (when the edges are not closed it fails).
Since the defects are close to a half circle, I tried to find them with imfindcircles(), but it returns many possible results. When I take the one with the highest metric, I sometimes get the right one, but not even close to 30% of the time.
Do any of you have an idea what I can try to find at least more than 50%?
If someone has an idea and wants to try something I added another picture.
Since I am new I can only add two pictures but if you need more I can provide more pictures.
Are you going to detect rough edges like that on smooth binary overlays like the one you provided before? E.g. are you making a program whose input is a black image with lots of rough-edged circles which it is then supposed to detect, i.e. sudden rough discontinuities in a normally very smooth region?
If the above assumption is valid, then this may be solved via classical signal processing. My suggestion: plot a graph of the intensity along a line between any two points outside and inside the circle. It should look like
.. continuous constant ... continuous constant .. continuous constant.. DISCONTINUOUS VARYING!! DISCONTINUOUS VARYING!! DISCONTINUOUS VARYING!! ... continuous constant .. continuous constant..
Write your own function to detect these discontinuities.
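Here is a minimal sketch of that idea in Python/NumPy, assuming a grayscale image and a horizontal scan line through the circle; the threshold is a placeholder you would tune on real data.

```python
import cv2
import numpy as np

img = cv2.imread("circle.png", cv2.IMREAD_GRAYSCALE)

# Intensity profile along one scan line crossing the circle edge.
profile = img[img.shape[0] // 2, :].astype(np.int32)

# A smooth edge gives small step-to-step changes; a rough defect gives
# large, rapidly varying differences.
diff = np.abs(np.diff(profile))
threshold = 40  # placeholder value
defect_columns = np.flatnonzero(diff > threshold)
print("discontinuities at columns:", defect_columns)
```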
OR
Gradient: The rate of change of certain quantities w.r.t a distance measure.
Use the very famous Sobel (gradient) filter.
Use the X-axis version of the filter, see the result, and if it gives you something detectable, use it; do the same for the Y-axis version of the filter.
In case you're wondering: if you're using Matlab, then you just need to get the readily available 3x3 Sobel matrix (seen almost everywhere on the internet) and plug it into the imfilter function, or use the built-in implementation (edge(image,'sobel')) if you have the required toolbox.
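For non-Matlab readers, the same Sobel idea as a minimal Python/OpenCV sketch:

```python
import cv2
import numpy as np

img = cv2.imread("circle.png", cv2.IMREAD_GRAYSCALE)

# X-axis and Y-axis versions of the 3x3 Sobel filter.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Gradient magnitude: rough defect areas should respond more strongly
# than the smooth parts of the circle edge.
magnitude = cv2.magnitude(gx, gy)
cv2.imwrite("sobel.png", np.uint8(np.clip(magnitude, 0, 255)))
```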
I am trying to count the number of hairs transplanted in the following image. So practically, I have to count the number of spots I can find in the center of the image.
(I've uploaded the inverted image of a bald scalp on which new hairs have been transplanted because the original image is bloody and absolutely disgusting! To see the original non-inverted image click here. To see the larger version of the inverted image just click on it). Is there any known image processing algorithm to detect these spots? I've found out that the Circle Hough Transform algorithm can be used to find circles in an image, I'm not sure if it's the best algorithm that can be applied to find the small spots in the following image though.
P.S. According to one of the answers, I tried to extract the spots using ImageJ, but the outcome was not satisfactory enough:
I opened the original non-inverted image (Warning! it's bloody and disgusting to see!).
Split the channels (Image > Color > Split Channels) and selected the blue channel to continue with.
Applied Closing filter (Plugins > Fast Morphology > Morphological Filters) with these values: Operation: Closing, Element: Square, Radius: 2px
Applied White Top Hat filter (Plugins > Fast Morphology > Morphological Filters) with these values: Operation: White Top Hat, Element: Square, Radius: 17px
However, I don't know exactly what to do after this step to count the transplanted spots as accurately as possible. I tried to use (Process > Find Maxima), but the result does not seem accurate enough to me (with these settings: Noise tolerance: 10, Output: Single Points, Excluding Edge Maxima, Light Background):
As you can see, some white spots have been ignored and some white areas which are not actually hair transplant spots have been marked.
What set of filters do you advise to accurately find the spots? Using ImageJ seems a good option since it provides most of the filters we need. Feel free however, to advise what to do using other tools, libraries (like OpenCV), etc. Any help would be highly appreciated!
I think you are trying to solve the problem in a somewhat wrong way. It might sound groundless, so I'd better show my results first.
Below I have a crop of your image on the left and the discovered transplants on the right. Green is used to highlight areas with more than one transplant.
The overall approach is very basic (I will describe it later), but it still provides close-to-accurate results. Please note, it was a first try, so there is a lot of room for enhancements.
Anyway, let's get back to the initial statement saying your approach is wrong. There are several major issues:
the quality of your image is awful
you say you want to find spots, but actually you are looking for hair transplant objects
you completely ignore the fact that the average head is far from flat
it looks like you think filters will add some important details to your initial image
you expect algorithms to do magic for you
Let's review all these items one by one.
1. Image quality
It might be very obvious statement, but before the actual processing you need to make sure you have best possible initial data. You might spend weeks trying to find a way to process photos you have without any significant achievements. Here are some problematic areas:
I bet it is hard for you to "read" those crops, despite the fact you have the most advanced object recognition algorithms in your brain.
Also, your time is expensive and you still need the best possible accuracy and stability. So, for any reasonable price, try to get: proper contrast, sharp edges, better colors and color separation.
2. Better understanding of the objects to be identified
Generally speaking, you have 3D objects to be identified, so you can analyze shadows in order to improve accuracy. BTW, it is almost like a Mars surface analysis :)
3. The form of the head should not be ignored
Because of the form of the head you have distortions. Again, in order to get proper accuracy those distortions should be corrected before the actual analysis. Basically, you need to flatten analyzed area.
(3D model source)
4. Filters might not help
Filters do not add information, but they can easily remove some important details. You've mentioned Hough transform, so here is interesting question: Find lines in shape
I will use this question as an example. Basically, you need to extract the geometry from a given picture. The lines in the shape look a bit complex, so you might decide to use skeletonization.
All of a sudden, you have more complex geometry to deal with and virtually no chance of understanding what was actually in the original picture.
5. Sorry, no magic here
Please be aware of the following:
You must try to get better data in order to achieve better accuracy and stability. The model itself is also very important.
Results explained
As I said, my approach is very simple: the image was posterized and then I used a very basic algorithm to identify areas with a specific color.
Posterization can be done in a more clever way, area detection can be improved, etc. For this PoC I just used a simple rule to highlight areas with more than one implant. Having the areas identified, a bit more advanced analysis can be performed.
Anyway, better image quality will let you use even a simple method and get proper results.
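For illustration, a rough Python/OpenCV approximation of this posterize-then-measure idea (not the actual program used above; the quantization step and the 1.5x size rule here are assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("scalp.jpg", cv2.IMREAD_GRAYSCALE)

# Crude posterization: reduce 256 gray levels to 4 bands.
poster = (img // 64) * 64

# Assume the darkest band corresponds to transplant marks.
mask = np.uint8(poster == 0) * 255
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)

areas = stats[1:, cv2.CC_STAT_AREA]  # skip label 0 (the background)
mean_area = areas.mean()
multiple = np.sum(areas > 1.5 * mean_area)  # areas with >1 transplant
print("single:", len(areas) - multiple, "multiple:", multiple)
```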
Finally
How did the clinic manage to get Yondu as client? :)
Update (tools and techniques)
Posterization - GIMP (default settings, minimum colors)
Transplant identification and visualization - Java program, no libraries or other dependencies
Having the areas identified, it is easy to find the average size, then compare it to the other areas and mark significantly bigger ones as multiple transplants.
Basically, everything is done "by hand": horizontal and vertical scans, whose intersections give areas. Vertical lines are sorted and used to restore the actual shape. The solution is homegrown and the code is a bit ugly, so I do not want to share it, sorry.
The idea is pretty obvious and well explained (at least I think so). Here is an additional example with different scan step used:
Yet another update
A small piece of code, developed to verify a very basic idea, has evolved a bit, so now it can handle 4K video segmentation in real time. The idea is the same: horizontal and vertical scans, areas defined by intersecting lines, etc. Still no external libraries, just a lot of fun and slightly more optimized code.
Additional examples can be found on YouTube: RobotsCanSee
or follow the progress in Telegram: RobotsCanSee
I've just tested this solution using ImageJ, and it gave a good preliminary result:
On the original image, for each channel
Small (radius 1 or 2) closing in order to get rid of the hairs (black part in the middle of the white one)
White top-hat of radius 5 in order to detect the white part around each black hair.
Small closing/opening in order to clean a little bit the image (you can also use a median filter)
Ultimate erode in order to count the number of white blobs remaining. You can certainly also use a LoG (Laplacian of Gaussian) or a distance map.
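For readers without ImageJ, a rough Python/OpenCV translation of these steps could look like this; it uses a distance map instead of ultimate erode (as suggested above), and the radii and thresholds only approximate the ImageJ settings.

```python
import cv2
import numpy as np

img = cv2.imread("scalp.jpg")
channel = img[:, :, 0]  # blue channel (OpenCV loads images as BGR)

k_small = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
k_tophat = cv2.getStructuringElement(cv2.MORPH_RECT, (11, 11))

# Small closing to remove the dark hair inside each white spot.
closed = cv2.morphologyEx(channel, cv2.MORPH_CLOSE, k_small)
# White top-hat to keep the bright area around each hair.
tophat = cv2.morphologyEx(closed, cv2.MORPH_TOPHAT, k_tophat)
# Small opening to clean the image a little.
clean = cv2.morphologyEx(tophat, cv2.MORPH_OPEN, k_small)

# Distance map + threshold as a stand-in for ultimate erosion,
# then count the surviving blobs.
_, bw = cv2.threshold(clean, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
dist = cv2.distanceTransform(bw, cv2.DIST_L2, 3)
_, peaks = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
n, _ = cv2.connectedComponents(np.uint8(peaks))
print("estimated spots:", n - 1)
```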
[EDIT]
You don't detect all the white spots using the maxima function because, after the closing, some zones are flat, so the maximum is not a point but a zone. At this point, I think that an ultimate opening or ultimate erosion would give you the center of each white spot. But I am not sure that there is a function/plugin doing it in ImageJ. You can take a look at Mamba or SMIL.
An H-maxima (after the white top-hat) may also clean your results a little more and improve the contrast between the white spots.
As Renat mentioned, you should not expect algorithms to do magic for you; however, I'm hopeful you can come up with a reasonable estimate of the number of spots. Here, I'm going to give you some hints and resources; check them out and get back to me if you need more information.
First, I'm hopeful about morphological operations, but I think a good pre-processing step may dramatically push the accuracy they yield, so I want to put my finger on the pre-processing step. Thus I'm going to work with this image:
That's the idea:
Collect and concentrate the mass around the spot locations. What do I mean by concentrating the masses? Let's open the book from the other side: as you see, the provided image contains some salient spots surrounded by some noisy gray-level dots.
By dots, I mean pixels that are not part of a spot but whose gray-values are larger than zero (pure black), and which appear around the spots. It is clear that if you clear these noisy dots, you will surely come up with a good estimate of the spots using other processing tools such as morphological operations.
Now, how to make the image sharper? What if we could make the dots move toward their nearest spots? This is what I mean by concentrating the masses over the spots. Doing so, only the prominent spots will remain in the image, and hence we have made a significant step toward counting them.
How to do the concentrating thing? Well, the idea I just explained is described in this paper, whose code is luckily available. See section 2.2. The main idea is to use a random walker that walks on the image forever. The formulation is stated such that the walker visits the prominent spots far more often, which leads to identifying them. The algorithm is modeled as a Markov chain, and the equilibrium hitting times of the ergodic Markov chain hold the key to identifying the most salient spots.
What I described above is just a hint and you should read that short paper to get the detailed version of the idea. Let me know if you need more info or resources.
It is a pleasure to think about such interesting problems. Hope it helps.
You could do the following:
Threshold the image using cv::threshold
Find connected components using cv::findContours
Reject connected components larger than a certain size, as you seem to be concerned with small circular regions only.
Count all the valid connected components.
Hopefully, you'll have a decent approximation of the actual number of spots.
To be statistically more accurate, you could repeat 1-4 for a range of thresholds and take the average.
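A minimal sketch of these steps in Python/OpenCV (assuming OpenCV 4.x and the inverted image with white spots on a dark background; the size cut-off and threshold range are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("scalp_inverted.jpg", cv2.IMREAD_GRAYSCALE)
max_area = 400  # placeholder: reject blobs bigger than a plausible spot

counts = []
for t in range(100, 200, 20):  # repeat steps 1-4 for a range of thresholds
    _, bw = cv2.threshold(img, t, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    valid = [c for c in contours if cv2.contourArea(c) <= max_area]
    counts.append(len(valid))

print("estimated spots:", int(np.mean(counts)))
```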
This is what you get after applying an unsharp mask (radius 22, amount 5, threshold 2) to your image.
This increases the contrast between the dots and the surrounding areas. I used the ballpark assumption that the dots are somewhere between 18 and 25 pixels in diameter.
Now you can take the local maxima of white as a "dot" and fill it in with a black circle until the circular neighborhood of the dot (a circle of radius 10-12) erases the dot. This should let you "pick off" the dots joined to each other in clusters of more than 2. Then look for local maxima again. Rinse and repeat.
The actual "dot" areas are in stark contrast to the surrounding areas, so this should let you pick them off as well as you would by eyeballing it.
I want to remove background and get deer as a foreground image.
This is my source image captured by trail camera:
This is what I want to get. This output image can be a binary image or RGB.
I have worked on it and tried many methods, but every one failed at some specific point. So please first understand what my exact problem is.
Images are captured by a trail camera with a motion detector: when a deer comes in front of the camera, it captures an image.
The scene changes with the weather and between day and night, so I can't use frame differencing or anything like that.
Segmentation may not work correctly because the foreground (deer) and the background have the same color in many cases.
If anyone still finds any ambiguity in my question, please first ask me to clarify and then answer; it will be appreciated.
Thanks in advance.
Here's what I would do:
As was commented on your question, you can detect the deer and then perform GrabCut to segment it from the picture.
To detect the deer, I would couple a classifier with a sliding-window approach. That means you'll have a classifier that, given a patch (it can be a large patch) in the image, outputs a score of how similar that patch is to a deer. The sliding-window approach means that you loop over the window size and then over the window location. For each position of the window in the image, you apply the classifier on that window and get a score of how much that window "looks like" a deer. Once you've done that, threshold all the scores to get the "best windows", i.e. the windows that are most similar to a deer. The rationale behind this is that if a deer is present at some location in the image, the classifier will output a high score at all windows that are close to or overlap the actual deer location. We would like to merge all those locations into a single location. That can be done by applying the function groupRectangles from OpenCV:
http://docs.opencv.org/modules/objdetect/doc/cascade_classification.html#grouprectangles
Take a look at one of the face detection examples from OpenCV; it basically does the same (sliding window + classifier), where the classifier is a Haar cascade.
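A skeleton of that sliding-window loop in Python/OpenCV; score_window is a hypothetical placeholder for whatever classifier you end up using (HOG+SVM, CNN features, etc.).

```python
import cv2

def score_window(patch):
    # Hypothetical classifier: return how "deer-like" the patch is, in [0, 1].
    # Replace with HOG+SVM, CNN features + classifier, etc.
    return 0.0

img = cv2.imread("trail.jpg")
h, w = img.shape[:2]

candidates = []
for win in (128, 192, 256):                # loop on window size
    for y in range(0, h - win, win // 4):  # loop on window location
        for x in range(0, w - win, win // 4):
            if score_window(img[y:y + win, x:x + win]) > 0.8:
                candidates.append([x, y, win, win])

# Merge the overlapping detections around the actual deer location.
if candidates:
    rects, weights = cv2.groupRectangles(candidates, groupThreshold=1, eps=0.2)
```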
Now, I didn't mention what that "deer classifier" can be. You can use HOG+SVM (both are included in OpenCV) or the much more powerful approach of running a deep convolutional neural network (deep CNN). Luckily, you don't need to train a deep CNN. You can use the following packages with their "off the shelf" ImageNet networks (which are very powerful and might even be able to identify a deer without further training):
Decaf - which can be used only for research purposes:
https://github.com/UCB-ICSI-Vision-Group/decaf-release/
Or Caffe - which is BSD licensed:
http://caffe.berkeleyvision.org/
There are other packages of which you can read about here:
http://deeplearning.net/software_links/
The most common ones are Theano, Cuda ConvNets and OverFeat (but that's really opinion-based; you should choose the best package from the list I linked to).
The "off the shelf" ImageNet network were trained on roughly 10M images from 1000 categories. If those categories contain "dear", that you can just use them as is. If not, you can use them to extract features (as a 4096 dimensional vector in the case of Decaf) and train a classifier on positive and negative images to build a "dear classifier".
Now, once you've detected the deer, meaning you have a bounding box around it, you can apply GrabCut:
http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_grabcut/py_grabcut.html
You'll need an initial scribble on the deer to perform GrabCut. You can just take a horizontal line in the middle of the bounding box and hope that it lands on the deer's torso. A more elaborate approach would be to find the symmetry axis of the deer and use that as a scribble, but you would have to google, research and implement some method to extract a symmetry axis from the image.
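The GrabCut call itself is short; here is a minimal Python/OpenCV sketch initialized with the detector's rectangle (the scribble refinement described above would be extra GC_FGD strokes painted into the mask).

```python
import cv2
import numpy as np

img = cv2.imread("trail.jpg")
rect = (80, 60, 300, 220)  # placeholder: (x, y, w, h) from the deer detector

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground; everything else turns black.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
              1, 0).astype(np.uint8)
cv2.imwrite("deer_only.png", img * fg[:, :, None])
```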
That's about it. Not straightforward, but so is the problem.
Please let me know if you have any questions.
Try OpenCV background subtraction with Mixture of Gaussians models. They should be adaptable enough for your scenes. Of course, the final performance will depend on the scenario, but it is worth trying.
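A minimal sketch, assuming you can feed the subtractor a sequence of frames (a burst or video) rather than a single photo:

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                                detectShadows=True)

cap = cv2.VideoCapture("trail_camera.mp4")  # placeholder input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Foreground mask: 255 = moving object, 127 = shadow, 0 = background.
    fg_mask = subtractor.apply(frame)
cap.release()
```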
Since you just want to separate the background from the foreground I think you do not need to recognize the deer. You need to recognize an object in motion in the scene. You just need to separate what is static in a significant interval of time (background) from what is not static: the deer.
There are algorithms that combine multiple frames from the same scene in order to determine the background, like THIS ONE.
You mentioned that the scene changes with the weather and between day and night across photos of different deer.
You could implement a solution where, when motion is detected, instead of taking a single photo the camera takes a few of them at some interval.
This interval has to be long enough to get the deer in different positions or out of the scene, and at the same time short enough not to be much affected by scene variations. Perhaps you'll need to deal with some brightness variation, but I think it is feasible to determine the background using these frames and finally segment the deer in the "motion frame".
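A minimal sketch of that idea in Python/OpenCV, assuming a short burst of frames of the same scene has been captured:

```python
import cv2
import numpy as np

frames = [cv2.imread(f"burst_{i}.jpg") for i in range(5)]  # placeholder names

# Per-pixel median over the burst: the deer moves between frames, so it
# drops out and the static background remains.
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

# Segment the deer in the "motion frame" by differencing against it.
diff = cv2.absdiff(frames[0], background)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
_, deer_mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
cv2.imwrite("deer_mask.png", deer_mask)
```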
I have to simulate facial expressions on a face image (say, an open mouth). For this I first extracted the facial feature points and found the corners of the lips. Now I need to deform the image by moving the points down.
In the above image I need to move points (2) and (3) some distance left and right respectively, and points (18) and (19) a little bit down, so that I get an expression like an opened mouth.
My Questions:
1) Is this the above way right to proceed to simulate facial expression?
2) If it is right how to move the points and create a new image in opencv?
A fairly recent survey and course of techniques people have used in this area is here:
http://old.siggraph.org/publications/2006cn/course30.pdf
TL;DR: There is no "right" way to do it, in any absolute sense. You need to define your goal in a way that is computable. Then figure out what additional (prior) information you need to reach it, in addition to the image data themselves. Fiddling with "texture warping" or other interpolation schemes before you decide what you need to do is a waste of time.
You mention "an expression like an opened mouth", and I interpret that to mean that you'd like to produce an image similar to what the real face would look like if the subject had been photographed with their mouth open. The markers you found obviously do not give enough information about that - in particular, they do not express any notion of "mouth". In fact, that notion is nowhere to be found in the image. So, strictly speaking, your task is unsolvable unless you throw more information into it.
I suggest you take a look at the paper linked above and rethink your problem.
So, here we go again...
I've seen a lot of people use Delaunay triangulation on those points, and then texture warping or distortion in OpenGL or even OpenCV.
https://github.com/MasteringOpenCV/code/tree/master/Chapter7_HeadPoseEstimation
It looks quite related; parts of the "Mastering OpenCV" book are on Google Books.
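To make the idea concrete, here is a minimal Python/OpenCV sketch of warping a single triangle of such a mesh; in practice the triangle list would come from cv2.Subdiv2D over all landmarks, and you would loop over every triangle after moving the mouth points as described in the question. The landmark coordinates below are hypothetical.

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, t_src, t_dst):
    """Warp triangle t_src from src_img onto triangle t_dst in dst_img."""
    r1 = cv2.boundingRect(np.float32([t_src]))
    r2 = cv2.boundingRect(np.float32([t_dst]))
    # Triangle coordinates relative to their bounding rectangles.
    t1 = np.float32([(x - r1[0], y - r1[1]) for x, y in t_src])
    t2 = np.float32([(x - r2[0], y - r2[1]) for x, y in t_dst])

    patch = src_img[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    M = cv2.getAffineTransform(t1, t2)
    warped = cv2.warpAffine(patch, M, (r2[2], r2[3]),
                            borderMode=cv2.BORDER_REFLECT_101)

    mask = np.zeros((r2[3], r2[2], 3), np.uint8)
    cv2.fillConvexPoly(mask, np.int32(t2), (1, 1, 1))
    roi = dst_img[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    roi[:] = roi * (1 - mask) + warped * mask

img = cv2.imread("face.jpg")
out = img.copy()
# Hypothetical mouth triangle; the bottom vertex is moved 10 px down to
# simulate the jaw opening.
src_tri = [(120, 200), (160, 205), (140, 230)]
dst_tri = [(120, 200), (160, 205), (140, 240)]
warp_triangle(img, out, src_tri, dst_tri)
cv2.imwrite("open_mouth.jpg", out)
```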
I'm trying to find an efficient way of acceptable complexity to:
detect an object in an image so I can isolate it from its surroundings
segment that object into its sub-parts and label them so I can then fetch them at will
It's been 3 weeks since I entered the image processing world, and I've read about so many algorithms (SIFT, snakes, more snakes, Fourier-related methods, etc.) and heuristics that I don't know where to start or which one is "best" for what I'm trying to achieve. Bearing in mind that the image dataset of interest is a pretty large one, I don't even know if I should use an algorithm implemented in OpenCV or implement one on my own.
Summarize:
Which methodology should I focus on? Why?
Should I use OpenCV for that kind of stuff or is there some other 'better' alternative?
Thank you in advance.
EDIT -- More info regarding the datasets
Each dataset consists of 80K images of products sharing the same:
- concept (e.g. t-shirts, watches, shoes)
- size
- orientation (90% of them)
- background (95% of them)
All the pictures in each dataset look almost identical apart from the product itself, apparently. To make things a little clearer, let's consider only the 'watch dataset':
All the pictures in the set look almost exactly like this:
(again, apart from the watch itself). I want to extract the strap and the dial. The thing is that there are lots of different watch styles and therefore shapes. From what I've read so far, I think I need a template algorithm that allows bending and stretching, so as to be able to match straps and dials of different styles.
Instead of creating three distinct templates (upper part of the strap, lower part of the strap, dial), it would be reasonable to create only one and segment it into 3 parts. That way, I would be confident enough that each part was detected with respect to the others as intended, e.g. the dial would not be detected below the lower part of the strap.
From all the algorithms/methodologies I've encountered, active shape/appearance models seem the most promising. Unfortunately, I haven't managed to find a decent implementation, and I'm not confident enough that it's the best approach to go ahead and write one myself.
If anyone could point out what I should be really looking for (algorithm/heuristic/library/etc.), I would be more than grateful. If again you think my description was a bit vague, feel free to ask for a more detailed one.
From what you've said, here are a few things that pop up at first glance:
The simplest thing to do is binarize the image and find connected components using OpenCV or the cvBlob library. For simple images with a non-complex background, this usually yields the objects.
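A minimal sketch of that first approach, assuming a reasonably uniform background:

```python
import cv2

img = cv2.imread("watch.jpg", cv2.IMREAD_GRAYSCALE)

# Otsu binarization, inverted so the product is white on black.
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Connected components; stats give each blob's bounding box and area.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
for i in range(1, n):  # skip label 0 (the background)
    x, y, w, h, area = stats[i]
    print(f"object {i}: bbox=({x},{y},{w},{h}) area={area}")
```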
However, looking at your sample image, texture-based segmentation techniques may work better - the watch dial, the straps and the background vary widely in texture/roughness, and this could be an ideal way to separate them.
The roughness of a region can easily be found by the Eigen transform (explained a bit on SO; check the link to the research paper provided there), and then the Mean Shift filter can be applied to the output of the Eigen transform. This will give regions clearly separated according to texture. Both the pyramidal Mean Shift and finding eigenvalues by SVD are implemented in OpenCV, so unless you can optimize your own code, it's better (and easier) to use the built-in functions (if present) as far as speed and efficiency are concerned.
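The Eigen transform itself is not built into OpenCV, so as a rough stand-in this sketch uses local standard deviation as the roughness measure and then applies OpenCV's pyramidal Mean Shift to it:

```python
import cv2
import numpy as np

gray = cv2.imread("watch.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Local standard deviation as a crude roughness/texture measure
# (a stand-in for the Eigen transform described in the paper).
mean = cv2.blur(gray, (15, 15))
sq_mean = cv2.blur(gray * gray, (15, 15))
roughness = np.sqrt(np.maximum(sq_mean - mean * mean, 0))

# Pyramidal Mean Shift needs an 8-bit, 3-channel image.
rough8 = cv2.normalize(roughness, None, 0, 255,
                       cv2.NORM_MINMAX).astype(np.uint8)
segmented = cv2.pyrMeanShiftFiltering(
    cv2.cvtColor(rough8, cv2.COLOR_GRAY2BGR), sp=21, sr=30)
cv2.imwrite("texture_regions.png", segmented)
```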
I think I would turn the problem around. Instead of hunting for the dial, I would use a set of robust features from the watch to 'stitch' the target image onto a template. The first watch has a set of white squares in the dial; the second watch has a number of white circles. Per type of watch, I would:
Segment out the squares or circles in the dial. Segmentation steps can be tricky, as they are usually both scale- and light-dependent.
Estimate the centers or corners of the above found feature areas. These are the new feature points.
Use the Hungarian algorithm to match features between the template watch and the target watch. Alternatively, take the surroundings of each feature point in the original image and match them using cross-correlation.
Use matching features between the template and the target to estimate scaling, rotation and translation
Stitch the image
As the image is now in a known form, one can extract the regions simply via preset coordinates.
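As a sketch of the matching, estimation and stitching steps (swapping the Hungarian matching on custom features for ORB descriptors with cross-checked brute-force matching, which is simpler to get running):

```python
import cv2
import numpy as np

template = cv2.imread("template_watch.jpg", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("target_watch.jpg", cv2.IMREAD_GRAYSCALE)

# ORB features as a stand-in for the segmented square/circle centers.
orb = cv2.ORB_create(1000)
kp_t, des_t = orb.detectAndCompute(template, None)
kp_g, des_g = orb.detectAndCompute(target, None)

# Cross-checked brute-force matching instead of the Hungarian algorithm.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_g, des_t), key=lambda m: m.distance)[:50]

src = np.float32([kp_g[m.queryIdx].pt for m in matches])
dst = np.float32([kp_t[m.trainIdx].pt for m in matches])

# Estimate scaling, rotation and translation with RANSAC.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

# "Stitch" the target into the template's frame; regions can now be
# extracted at preset template coordinates.
h, w = template.shape
aligned = cv2.warpAffine(target, M, (w, h))
cv2.imwrite("aligned.png", aligned)
```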