What criteria can be used to measure an image filtering algorithm?
I'm writing a paper related to convolutional image processing, and I'm lacking criteria that can be used to measure and analyse my results. I've thought of its influence on image quality; however, I cannot seem to find any convincing equations or criteria for things like noise pollution or distortion.
Literally any help is much appreciated as I’m running low on time.
Image quality is a large field and can be somewhat nebulous, because an improvement in one metric can directly cause a degradation in another. As Nico S. commented, quality measurement depends on the application. A human user may care much more about accurate color than sharpness, while a machine vision algorithm may need minimal noise over accurate color.
Here is a great resource on image noise measurements and equations.
Here are some sharpness measurement techniques.
Here is a link to distortion measurement methods.
Don't just use the methods without reason, though; figure out what is important to your application and why, then explain how your algorithm improves image quality in the specific areas that matter. Since you're working on filters, an example closer to your application might be how a Gaussian filter decreases noise. You can measure the noise of an image before and after applying your filter. The tradeoff of a Gaussian filter is that you lose sharpness, since it blurs the image. If the point of the paper is purely to provide analysis, you can present both quality metrics to show that the filter improves what you want it to improve but takes away quality in another area.
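As a concrete illustration, here is a minimal before/after noise measurement sketch. It assumes you have a patch of the image you know to be flat (uniform), so any variation inside it is noise; the function name and signature are my own, not from any particular library.

#include <cmath>
#include <cstddef>
#include <vector>

// Noise estimate (hypothetical helper): standard deviation of the
// pixel values inside a patch known to be flat, so that any variation
// there is noise. img is a grayscale image stored row-major with the
// given stride; the patch spans (x0, y0) to (x0 + w, y0 + h).
double patchNoiseStddev(const std::vector<double>& img, std::size_t stride,
                        std::size_t x0, std::size_t y0,
                        std::size_t w, std::size_t h)
{
    double sum = 0.0, sumSq = 0.0;
    for (std::size_t y = y0; y < y0 + h; ++y)
        for (std::size_t x = x0; x < x0 + w; ++x) {
            double v = img[y * stride + x];
            sum += v;
            sumSq += v * v;
        }
    const double n = static_cast<double>(w * h);
    const double mean = sum / n;
    return std::sqrt(sumSq / n - mean * mean);
}

Run it on the same patch in the input and in the filtered output; a lower value after filtering supports the noise-reduction claim, and pairing it with a sharpness metric (e.g. variance of the Laplacian) makes the tradeoff visible.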
Here is one more link to other image quality factors you can explore. Good luck.
I have a general question about contrast adjustment; forgive me if it is too naive or general, and please let me know if any correction is necessary.
Here is my question: when do we usually do contrast adjustment or contrast stretching in image processing or computer vision? In particular, when is it necessary to do contrast adjustment for object detection or segmentation? What are the downsides of contrast stretching if it is not applied in the right situation? Can you give me a few examples as well?
Your answers are greatly appreciated!
In general, you can classify algorithms related to image and video processing into two categories:
Image and video processing:
These algorithms are used to enhance the quality of the image.
Computer vision:
These algorithms are used to detect, recognize, and classify objects.
Contrast adjustment techniques are used to enhance the quality and visibility of the image.
Most of the time, better input quality for computer vision algorithms leads to better results; this is why most CV algorithms include pre-processing steps to remove noise and improve quality.
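To make "contrast stretching" concrete, here is a minimal sketch of the simplest (linear) version for an 8-bit grayscale image; the function name is my own.

#include <algorithm>
#include <cstdint>
#include <vector>

// Linear contrast stretch: remap the occupied intensity range
// [lo, hi] of an 8-bit grayscale image onto the full [0, 255] range.
void contrastStretch(std::vector<std::uint8_t>& img)
{
    if (img.empty()) return;
    auto mm = std::minmax_element(img.begin(), img.end());
    int lo = *mm.first, hi = *mm.second;
    if (hi == lo) return;                 // flat image: nothing to stretch
    for (std::uint8_t& p : img)
        p = static_cast<std::uint8_t>((p - lo) * 255 / (hi - lo));
}

One downside worth noting: the stretch is driven by the extreme values, so a single outlier pixel can waste most of the output range, and any noise present is amplified along with the signal. Clipping at percentiles instead of the raw min/max is a common, more robust variant.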
I've found a few ways of reducing noise in an image, but my task is to measure it.
So I am interested in an algorithm that will give me some number, a noise rating, so that with that number I will be able to say that one image has less noise than another.
From the image processing point of view, you can consult the classic paper "Image quality assessment: From error visibility to structural similarity", published in IEEE Transactions on Image Processing, which has already been cited 3000+ times according to Google Scholar. The basic idea is that the human visual perception system is highly sensitive to structural information, and noise (or distortion) often breaks that structure. The authors therefore proposed an objective measurement of image quality based on this motivation. You can find a MATLAB implementation here.
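For orientation, the core SSIM statistic looks like the sketch below. Note this is a simplification of my own: the paper evaluates the statistic in sliding local windows (typically Gaussian-weighted) and averages the results, whereas this version computes it once over the whole image.

#include <cstddef>
#include <vector>

// Core SSIM statistic between two equally sized grayscale images with
// pixel values in [0, 255]. Computed once over the whole image here
// for brevity, so treat the result as an approximation.
double ssimGlobal(const std::vector<double>& x, const std::vector<double>& y)
{
    const double C1 = 6.5025;    // (0.01 * 255)^2, stabilizing constant
    const double C2 = 58.5225;   // (0.03 * 255)^2
    const double n = static_cast<double>(x.size());
    double mx = 0.0, my = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double vx = 0.0, vy = 0.0, cov = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        vx  += (x[i] - mx) * (x[i] - mx);
        vy  += (y[i] - my) * (y[i] - my);
        cov += (x[i] - mx) * (y[i] - my);
    }
    vx /= n; vy /= n; cov /= n;
    return ((2.0 * mx * my + C1) * (2.0 * cov + C2))
         / ((mx * mx + my * my + C1) * (vx + vy + C2));
}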
To solve my problem I used the following approach:
My noise rating is just the number of pixels that were recognized as noise. To tell normal pixels from noise, I calculate the mean value of each pixel's neighbors, and if the pixel's relative deviation from that mean is bigger than some critical value, the pixel is counted as noise.
/* Relative deviation of the pixel's summed RGB value from the mean
   summed RGB value of its neighbors. */
double ratio = (double)(currentPixel.R + currentPixel.G + currentPixel.B)
             / (double)(neighborMean.R + neighborMean.G + neighborMean.B);
if (fabs(1.0 - ratio) > criticalValue)
{
    currentPixelIsNoise = TRUE;
}
I'm looking for an algorithm (ideally a C/C++ implementation) that calculates perceived similarity between two images, taking into account psychovisual factors (e.g. that a difference in chroma is not as bad as a difference in brightness).
I have an original image and multiple variations of it (256-color quantisations in my case), and I'd like the algorithm to find which image a human would judge as the best one.
The best I've found so far is SSIM, but it doesn't "understand" dithering (error diffusion), and the implementation uses linear RGB (I've fixed that by implementing my own).
Alternatively, it could be an algorithm that preprocesses images for comparison with SSIM/PSNR/MSE or another typical metric.
Well, can't you turn it into an online job with Amazon's Mechanical Turk?
Or make a game of it, like the Google Image Labeler? You can give extra points, or payment, if people agree on their scores.
The reason is that I think this job is just too difficult for a computer. SSIM can't score dithered images, and if you smooth the image to make it work with SSIM, the dither pattern can't be part of the quality judgement, because it is no longer present in the image. And that pattern is probably relevant to image quality.
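For completeness, the smoothing step being discussed could be as simple as the sketch below (a 3x3 box blur of my own choosing, standing in for any low-pass filter); as noted above, whatever detail the blur removes, including the dither pattern, is then invisible to SSIM/PSNR.

#include <cstdint>
#include <vector>

// 3x3 box blur. src and dst are w*h grayscale buffers, row-major.
void boxBlur3x3(const std::vector<std::uint8_t>& src,
                std::vector<std::uint8_t>& dst, int w, int h)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int sum = 0, count = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                        sum += src[ny * w + nx];   // accumulate the 3x3 window
                        ++count;
                    }
                }
            dst[y * w + x] = static_cast<std::uint8_t>(sum / count);
        }
}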
Fractals have always been a bit of a mystery for me.
What practical uses (beyond rendering beautiful images) are there for fractals in the various programming problem domains? And please, don't just list areas that use them: I'm interested in specific algorithms and how fractals are used with those algorithms to solve something in practice. Please at least give a short description of each algorithm.
Absolutely computer graphics. It's not about generating beautiful abstract images, but about realistic, non-repeating landscapes. Read about Fractal Landscapes.
Perlin noise, which might be considered a simple fractal, is used in computer graphics everywhere. Its author joked that if he had patented it, he'd be a millionaire by now. Fractals are also used in animation and lossy image compression.
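The "simple fractal" aspect is easy to show: fractal (1/f) noise is just several octaves of a base noise function summed at doubling frequencies and halving amplitudes. A minimal sketch, with baseNoise standing in for any smooth noise function such as Perlin's:

// Fractal (1/f) noise: sum octaves of a base noise function, doubling
// the frequency and halving the amplitude each octave. The repeated
// self-similarity across scales is exactly the fractal aspect.
double fbm(double x, double y, int octaves,
           double (*baseNoise)(double, double))
{
    double sum = 0.0, amplitude = 1.0, frequency = 1.0;
    for (int i = 0; i < octaves; ++i) {
        sum += amplitude * baseNoise(x * frequency, y * frequency);
        amplitude *= 0.5;
        frequency *= 2.0;
    }
    return sum;
}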
A Peano curve is a space-filling fractal, which allows you to cover a 2-D area (or higher-dimensional region) uniformly with a 1-D path. If you are doing local operations on a multidimensional array, storing and/or accessing the array data in space-filling curve order can increase your cache coherence, for all levels of cache.
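As an illustration, here is the well-known iterative mapping for the Hilbert curve (the most commonly used space-filling curve); visiting pixels in order of the returned index gives the locality-friendly traversal described above.

// Map (x, y) in an n-by-n grid (n a power of two) to its distance d
// along the Hilbert curve. Iterating pixels by increasing d visits
// spatial neighbours close together, which is what improves cache
// behaviour. This is the standard iterative formulation.
unsigned xy2d(unsigned n, unsigned x, unsigned y)
{
    unsigned d = 0;
    for (unsigned s = n / 2; s > 0; s /= 2) {
        unsigned rx = (x & s) ? 1 : 0;
        unsigned ry = (y & s) ? 1 : 0;
        d += s * s * ((3 * rx) ^ ry);
        if (ry == 0) {                     // rotate/reflect the quadrant
            if (rx == 1) {
                x = n - 1 - x;
                y = n - 1 - y;
            }
            unsigned t = x; x = y; y = t;  // swap x and y
        }
    }
    return d;
}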
Fractal image compression. There are some more applications, though not all in programming, here.
Error diffusion along a Hilbert curve.
It's a simple idea: suppose that you convert an image to a 0-1 black & white bitmap. Converting a 55% brightness pixel to white yields a +45% error. Instead of just forgetting it, you keep the 45% to take into account when processing the next pixel. Suppose its value is 80%. Normally it would be converted to white, but a neighboring pixel is too bright, so taking the +45% error into account, you convert it to black (80% - 45% = 35%), keeping a -35% error to be spread into the next pixels.
This way a 75% gray area will have a white/black pixel ratio close to 75/25, which is good. But if you process the pixels left-to-right, the error only spreads in one direction, which yields worse-looking images. Enter space-filling curves: processing the pixels along a Hilbert curve gives good locality of the error spread. More here, with pictures.
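Here is a minimal one-dimensional sketch of the error-carrying step described above (function name mine); diffusing along a Hilbert curve instead of left-to-right only changes the order in which the indices are visited.

#include <vector>

// One-dimensional error diffusion: threshold each pixel to black (0)
// or white (255) and carry the rounding error into the next pixel.
void diffuse1D(std::vector<double>& pixels)   // values in [0, 255]
{
    double error = 0.0;
    for (double& p : pixels) {
        double corrected = p + error;
        double out = (corrected >= 128.0) ? 255.0 : 0.0;
        error = corrected - out;   // what was lost by rounding
        p = out;
    }
}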
Fractals are used in finance for analyzing stock prices. They are also used in the study of complex systems (complexity theory) and in art.
One can use computer science algorithms to compute the fractal dimension, or Hausdorff dimension, of black-and-white images.
It is not that difficult to implement.
It turns out that this is used in biology and medicine to analyze cell samples, for example to analyze how aggressive a cancer cell is, or how far a disease has progressed. A cell is in general healthier the higher the dimension is, meaning you would expect cancer samples to show a low fractal dimension.
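In practice one usually estimates it by box counting, a practical stand-in for the Hausdorff dimension. A minimal sketch of the counting step (names mine): count the boxes of side s that touch a foreground pixel for several values of s, then the slope of a least-squares fit of log N(s) against log(1/s) is the dimension estimate.

#include <cstddef>
#include <vector>

// Count boxes of side s that contain at least one foreground pixel
// in a w*h binary image stored row-major.
std::size_t countBoxes(const std::vector<bool>& img,
                       std::size_t w, std::size_t h, std::size_t s)
{
    std::size_t count = 0;
    for (std::size_t by = 0; by < h; by += s)
        for (std::size_t bx = 0; bx < w; bx += s) {
            bool occupied = false;
            for (std::size_t y = by; y < by + s && y < h && !occupied; ++y)
                for (std::size_t x = bx; x < bx + s && x < w; ++x)
                    if (img[y * w + x]) { occupied = true; break; }
            if (occupied) ++count;
        }
    return count;
}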
Another use of fractal theory is fractal image interpolation. For example, Perfect Resize 7 uses fractals to resize images with very good quality. It is most likely using partitioned iterated function systems (PIFS), which assume that different parts of an image are self-similar to each other. The algorithm is based on searching for self-similar parts of an image and describing the transformations between them.
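The heart of that search can be sketched as follows. This is a heavily simplified sketch of my own: it assumes the candidate "domain" blocks have already been shrunk to the same pixel count as the "range" block, and a real encoder would also try the eight rotations/reflections of each domain and quantize the coefficients.

#include <cstddef>
#include <limits>
#include <vector>

// For one range block, find the domain block and brightness/contrast
// map v -> s*v + o that best reproduces it in the least-squares sense.
// Storing (domain index, s, o) per range block is the core of PIFS.
struct Match { std::size_t domain; double s, o, error; };

Match bestMatch(const std::vector<double>& range,
                const std::vector<std::vector<double>>& domains)
{
    Match best{0, 0.0, 0.0, std::numeric_limits<double>::max()};
    const double n = static_cast<double>(range.size());
    for (std::size_t k = 0; k < domains.size(); ++k) {
        const std::vector<double>& d = domains[k];
        double sr = 0, sd = 0, sdd = 0, sdr = 0;
        for (std::size_t i = 0; i < d.size(); ++i) {
            sr  += range[i];
            sd  += d[i];
            sdd += d[i] * d[i];
            sdr += d[i] * range[i];
        }
        // Least-squares fit of range ~ s*d + o.
        double denom = n * sdd - sd * sd;
        double s = (denom != 0.0) ? (n * sdr - sd * sr) / denom : 0.0;
        double o = (sr - s * sd) / n;
        double err = 0.0;
        for (std::size_t i = 0; i < d.size(); ++i) {
            double e = s * d[i] + o - range[i];
            err += e * e;
        }
        if (err < best.error) best = {k, s, o, err};
    }
    return best;
}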
Fractals are used in image compression; in antenna design (in many mobile phones the antenna is a fractal shape, for maximum surface area in a small space); in texture generation and mountain generation; and in modeling trees, cliffs, jellyfish, and any natural phenomenon where there is a degree of recursion and self-similarity at different scales. There are a lot of scientific applications.