Face detection algorithm for 15x15 pixel faces?

I'd like to know if you are aware of any algorithm to detect low-resolution faces in an image.
The image could have any resolution, and the face resolution could be as low as 10x10 or 15x15 pixels.
I am using OpenCV but I don't think the Haar classifiers provided allow me to work with resolutions so small.
Are there any other alternatives?
Thanks

The Eigenface technique is (IIRC) known to work fairly well on low-resolution images. Human faces have a distinct pattern to them that's still visible at low quality, and I believe that using a sliding window technique in conjunction with this algorithm might produce good results.
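To make the sliding-window idea concrete, here is a minimal sketch in Python with OpenCV. It assumes you already have a set of 15x15 grayscale face crops for training; the component count, window size, normalization, and function names are illustrative choices, not part of the original answer.

import cv2
import numpy as np

PATCH = 15          # window size matching the expected face resolution
N_COMPONENTS = 20   # number of eigenfaces to keep (tune on your data)

def train_eigenfaces(face_rows):
    # face_rows: float32 array of shape (n_samples, PATCH*PATCH), each row a
    # flattened, contrast-normalized 15x15 face crop
    mean, eigenvectors = cv2.PCACompute(face_rows, mean=None, maxComponents=N_COMPONENTS)
    return mean, eigenvectors

def face_score_map(gray, mean, eigenvectors):
    # score every window position by its reconstruction error in the eigenface
    # subspace ("distance from face space"); low scores are face-like
    h, w = gray.shape
    scores = np.full((h - PATCH + 1, w - PATCH + 1), np.inf, np.float32)
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = gray[y:y + PATCH, x:x + PATCH].astype(np.float32).reshape(1, -1)
            patch = (patch - patch.mean()) / (patch.std() + 1e-6)  # normalize like the training crops
            coeffs = cv2.PCAProject(patch, mean, eigenvectors)
            recon = cv2.PCABackProject(coeffs, mean, eigenvectors)
            scores[y, x] = np.linalg.norm(patch - recon)
    return scores   # threshold this map to get candidate face locations

Running this at two or three image scales gives a crude multi-scale detector; it is slow in pure Python, but the per-window work is small enough to vectorize if needed.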

Related

How to detect a crack in an image?

How to detect such cracks that you can see in the attached images? I have tried some OpenCV algorithms like blob detection (cv::SimpleBlobDetector) but couldn't get any results.
It is a cropped image; the full image has some other features as well, so I am not sure thresholding can work, because I have to get the bounding box of the detected crack. One way is to assign several regions of interest (ROIs) and try to detect within them, but this crack doesn't appear at the same location in the image. Any idea?
Can this problem be solved with machine/deep learning (like object detection) if I train a model on a crack dataset? The crack part of the image doesn't have many features, so I am not sure this method will work. Please guide.
Thanks.
These cracks are difficult to detect because the image is noisy (presumably X-ray) and the contrast poor, so the signal-to-noise ratio is low.
I would try applying a Gaussian filter for denoising, but only in the horizontal direction, to preserve the horizontal edges, and then detecting the horizontal edges.
This is roughly what a Gabor filter does. You can try different orientations.
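As a rough illustration of that suggestion, a small OpenCV sketch might look like the following; the file names, kernel size, and other parameter values are guesses to be tuned on your images.

import cv2
import numpy as np

img = cv2.imread('crack.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input file
img = cv2.GaussianBlur(img, (9, 1), 0)                # smooth only horizontally, preserving horizontal edges

responses = []
for theta in np.deg2rad([80, 90, 100]):               # orientations near horizontal
    kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))

response = np.max(responses, axis=0)                  # strongest response over the orientations
response = cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(response, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite('crack_mask.png', mask)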
Use a mathematical morphology operation.
For example, in Matlab:
a = imread('in.png');
se = strel('disk', 7);        % disk-shaped structuring element
b = imgaussfilt(a, 1.3);      % light Gaussian denoising
c = b - imopen(b, se);        % top-hat: keep bright structures narrower than the disk
c = 3 * c;                    % boost contrast
d = imclearborder(c);         % suppress structures touching the image border
imwrite(d, 'out.png');

Where to find materials about edge detection and which is good for virtual wardrobe application?

I am trying to build an application called virtual wardrobe where I am planning to capture the image of a human and then allow him to select different clothing and instantly see his virtual image wearing that clothing.
I do not have much knowledge of how to go about this idea. I read a few materials and found out a few edge detecting algorithms.
Sobel seems to be fast but not very accurate, while Canny is more accurate but slow.
There are a few other algorithms, like gradient-based and Laplacian operators, but I don't have much idea about those.
Are there good course materials available to understand these algorithms in detail?
Also, will it be better for this application to have an algorithm that is faster but less accurate, or slower but more accurate?
I do not have much knowledge about this so, any help is appreciated.
Thank you in advance.
Not sure if you have all the other components, but I think edge detection alone might not work well in many cases. Here are possible directions/techniques that you might find useful:
foreground detection: detecting which part of the image is the user; this might work better than pure edge detection if your background is not simple (see the sketch after this list).
face detection: detecting which part of the image is the user's face. This lets the clothing fit the user better, especially for sunglasses or hats.
skin color model: can be used as a basic alternative to face detection.
object tracking: if your input is a video, you can also use object tracking to speed up the other detection steps.
You might also consider other techniques such as human posture recognition or eye tracking, but they are more complicated than the items above.
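As a starting point for the first two items, here is a minimal OpenCV sketch; the camera index, cascade choice, and parameters are placeholder assumptions, and it relies on the opencv-python package shipping the Haar cascade files.

import cv2

cap = cv2.VideoCapture(0)                                 # webcam input
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg.apply(frame)                             # foreground detection: which pixels are the user
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                            # face detection: where to anchor hats/sunglasses
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('preview', frame)
    if cv2.waitKey(1) == 27:                              # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()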
I can suggest one solution. If you have images of various outfits, treat them as target images and replace the face of the target image with the face of the source image, i.e. the user. For that you essentially have to build a face replacement app. If you want to detect the face in the source image, do face detection first, then retrieve the face boundary from the source image. For this you can use various algorithms, of which I suggest a few:
Canny edge detection followed by longest-edge detection.
Skin color thresholding followed by a shrinking and growing algorithm (sketched below).
Adaptive Active Contour Model (Snake algorithm).
Canny is a bit slow; if you want the result fast, go for skin color thresholding.
For an accurate result you can use the Snake algorithm; it can detect the face boundary even when the face has shadows in it.
Read about detecting the face boundary using Canny edge detection.
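For the skin-color-thresholding route, a rough OpenCV sketch might look like this; the YCrCb bounds are commonly quoted defaults rather than values from the answer, the input file name is a placeholder, and the contour call assumes OpenCV 4's return signature.

import cv2
import numpy as np

img = cv2.imread('user.jpg')                              # hypothetical source image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
lower = np.array([0, 133, 77], np.uint8)                  # Y, Cr, Cb lower bounds
upper = np.array([255, 173, 127], np.uint8)               # Y, Cr, Cb upper bounds
mask = cv2.inRange(ycrcb, lower, upper)

# "shrinking and growing": erode to drop speckle, then dilate to close gaps
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.erode(mask, kernel, iterations=2)
mask = cv2.dilate(mask, kernel, iterations=2)

# take the largest skin region as the face candidate and get its boundary
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    face = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(face)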

How can I compensate illumination changes in iris images other than using Retinex theory?

I want to apply effective illumination compensation to iris images, and I want this compensation to be based on color, i.e. illumination compensation using color rather than texture. I have corrected my images for various mechanical errors, but I want a simple algorithm to compensate for the illumination based on color. Any ideas?
Try subtracting a low-pass copy of the same image?
What you are interested in is white balancing (i.e. achieving color constancy). One of the simplest algorithms is the Gray-World algorithm and I would try that one first because it's very easy to implement (even though it's not very precise).
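A minimal Gray-World sketch in Python/OpenCV, assuming a BGR iris image (the file names are placeholders): it rescales each channel so that all channel means become equal.

import cv2
import numpy as np

img = cv2.imread('iris.png').astype(np.float32)    # hypothetical input file
means = img.reshape(-1, 3).mean(axis=0)            # per-channel (B, G, R) means
gains = means.mean() / means                       # scale each channel toward the global mean
balanced = np.clip(img * gains, 0, 255).astype(np.uint8)
cv2.imwrite('iris_grayworld.png', balanced)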
You also might want to try some Retinex based algorithms. If so, visit this site: http://www.fer.unizg.hr/ipg/resources/color_constancy/
It contains C++ implementations of several Retinex-based color constancy algorithms.

Algorithm for culling pixels in a graphical data view?

I'm writing a wxpython widget which shows the state of several objects over time (x cycles). Right now I have it working at 1 pixel/cycle, and zooming in and back out to 1:1, but I would like to allow zooming out further. I wanted to see if there are any go-to algorithms for throwing away/combining data before I start rolling my own using only my own feeble heuristics. Is there any such algorithm, or should I just start coding my own solution?
Depends a lot on what type of images you're resizing. See The myth of infinite detail: Bilinear vs. Bicubic and Better Image Resizing by our very own Jeff! There you can compare results of naive nearest neighbor, bilinear filtering, bicubic filtering, bicubic sharper and genuine fractals.
Jeff's conclusion:
Reducing images is a completely safe and rational operation. You're simply reducing precision and resolution by discarding information. Make the image as small as you want, and you have complete fidelity, within the bounds of the number of pixels you've allowed. You'll get good results no matter which algorithm you pick. (Well, unless you pick the naive Pixel Resize or Nearest Neighbor algorithms.)
Enlarging images is risky. Beyond a certain point, enlarging images is a fool's errand; you can't magically synthesize an infinite number of new pixels out of thin air. And interpolated pixels are never as good as real pixels. That's why it's more than a little artificial to upsize the 512x512 Lena image by 500%. It'd be smarter to find a higher resolution scan or picture of whatever you need than it would be to upsize it in software.
But when you can't avoid enlarging an image, that's when it pays to know the tradeoffs between bicubic, bilinear, and more advanced resizing algorithms. At least arm yourself with enough knowledge to pick the best of the bad options you have.
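Back to the original question: for the simple case of one sample per cycle, plain block averaging (or a min/max envelope) is usually enough when zooming out. A numpy sketch, with invented names and shapes purely for illustration:

import numpy as np

def downsample_cycles(values, cycles_per_pixel):
    # values: 1-D array with one sample per cycle (the 1 px/cycle view)
    n = (len(values) // cycles_per_pixel) * cycles_per_pixel   # drop the ragged tail
    blocks = values[:n].reshape(-1, cycles_per_pixel)
    return blocks.mean(axis=1)   # or .max(axis=1)/.min(axis=1) for an envelope

samples = np.random.rand(10_000)                         # fake per-cycle data
pixels = downsample_cycles(samples, cycles_per_pixel=8)  # 8 cycles collapse into each pixel

Averaging hides short spikes, so for state traces a min/max envelope per pixel is often the better-looking choice.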

Practical Uses of Fractals in Programming

Fractals have always been a bit of a mystery for me.
What practical uses (beyond rendering to beautiful images) are there for fractals in the various programming problem domains? And please, don't just list areas that use them. I'm interested in specific algorithms and how fractals are used with those algorithms to solve something in practice. Please at least give a short description of the algorithm.
Absolutely computer graphics. It's not about generating beautiful abstract images, but realistic, non-repeating landscapes. Read about Fractal Landscapes.
Perlin noise, which might be considered a simple fractal, is used everywhere in computer graphics. Its author joked that if he had patented it, he'd be a millionaire now. Fractals are also used in animation and lossy image compression.
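To illustrate the fractal aspect, here is a toy fractional-Brownian-motion profile built by summing octaves of smooth noise at doubling frequencies and halving amplitudes; the noise source is simple interpolated value noise rather than true Perlin noise, and all names are made up for the example.

import numpy as np

def smooth_noise_1d(n_points, n_knots, rng):
    # piecewise-linear "value noise": random knots, linearly interpolated
    knots = rng.random(n_knots)
    x = np.linspace(0, n_knots - 1, n_points)
    return np.interp(x, np.arange(n_knots), knots)

def fbm_profile(n_points=512, octaves=6, seed=0):
    rng = np.random.default_rng(seed)
    height = np.zeros(n_points)
    amplitude, frequency = 1.0, 4
    for _ in range(octaves):
        height += amplitude * smooth_noise_1d(n_points, frequency, rng)
        amplitude *= 0.5   # each octave contributes half the amplitude...
        frequency *= 2     # ...at twice the frequency: self-similarity across scales
    return height          # e.g. a 1-D mountain-ridge silhouette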
A Peano curve is a space-filling fractal, which allows you to cover a 2-D area (or higher-dimensional region) uniformly with a 1-D path. If you are doing local operations on a multidimensional array, storing and/or accessing the array data in space-filling curve order can increase your cache coherence, for all levels of cache.
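A toy illustration of that idea, using a Z-order (Morton) curve instead of a Peano or Hilbert curve because its index is just bit interleaving; the helper name is made up.

def morton_index(x, y, bits=16):
    # interleave the bits of x and y to get a 1-D storage order that keeps
    # spatially close (x, y) cells close together in memory
    idx = 0
    for i in range(bits):
        idx |= ((x >> i) & 1) << (2 * i)
        idx |= ((y >> i) & 1) << (2 * i + 1)
    return idx

# e.g. store or visit a 2-D array's cells sorted by morton_index(x, y)
# to improve cache behaviour for local (neighbourhood) operations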
Fractal image compression. There are some more applications, though not all in programming, here.
Error diffusion along a Hilbert curve.
It's a simple idea - suppose that you convert an image to a 0-1 black & white bitmap. Converting a 55% brightness pixel to white yields a +45% error. Instead of just forgetting it, you keep the 45% to take into account when processing the next pixel. Suppose its value is 80%. Normally it would be converted to white, but a neighboring pixel is too bright, so taking the +45% error into account, you convert it to black (80%-45%=35%), keeping a -35% error to be spread into next pixels.
This way a 75% gray area will have white/black pixel ratio close to 75/25, which is good. But if you process the pixels left-to-right, the error only spreads in one direction, which yields worse looking images. Enter space-filling curves. Processing the pixels along a Hilbert curve gets good locality of the error spread. More here, with pictures.
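The bookkeeping described above, written out for a sequence of pixel values already visited in path order; producing the Hilbert-curve ordering itself is omitted, and the function name is made up.

def diffuse_along_path(values):
    # values: pixel brightness in percent (0..100), visited in Hilbert-curve
    # order; returns 0 (black) / 1 (white) per pixel
    out = []
    carried = 0.0                        # error carried forward along the path
    for v in values:
        effective = v - carried          # e.g. 80 - (+45) = 35 in the example above
        white = effective >= 50
        out.append(1 if white else 0)
        carried = (100 if white else 0) - effective   # new over/undershoot to pass on
    return out

# diffuse_along_path([55, 80]) -> [1, 0], matching the +45% / -35% walkthrough above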
Fractals are used in finance for analyzing stock prices. They are also used in the study of complex systems (complexity theory) and in art.
One can use computer science algorithms to compute the fractal dimension, or Hausdorff dimension, of black-and-white images.
It is not that difficult to implement.
It turns out that this is used in biology and medicine to analyze cell samples, for example to analyze how aggressive a cancer cell is, or how far a disease has progressed. A cell is in general healthier the higher its dimension is, meaning you hope for a low fractal dimension in cancer samples.
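A rough box-counting estimate of that dimension for a binary image; it assumes a square image whose side is a power of two and at least one foreground pixel, purely for illustration.

import numpy as np

def box_counting_dimension(binary_img):
    # binary_img: 2-D boolean array, True where the measured set (e.g. a cell outline) is
    size = binary_img.shape[0]
    box_sizes, counts = [], []
    box = size // 2
    while box >= 1:
        n_boxes = 0
        for y in range(0, size, box):            # count boxes containing any foreground pixel
            for x in range(0, size, box):
                if binary_img[y:y + box, x:x + box].any():
                    n_boxes += 1
        box_sizes.append(box)
        counts.append(n_boxes)
        box //= 2
    # the dimension is the slope of log(count) versus log(1/box_size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope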
Another use of fractal theory is fractal image interpolation. For example, Perfect Resize 7 uses fractals to resize images with very good quality. It most likely uses partitioned iterated function systems (PIFS), which assume that different parts of an image are self-similar to each other. The algorithm is based on searching for self-similar parts of an image and describing the transformations between them.
Fractals are used in image compression; in mobile phones, where the antenna design is a fractal to maximize surface area; in texture and mountain generation; in modeling trees, cliffs, and jellyfish; and in emulating any natural phenomenon with a degree of recursion and self-similarity at different scales. There are a lot of scientific applications.
