PNG images look blurry when scaled

I'm a visual/UI designer working on a product that was originally designed by another designer. That designer provided the front-end developer with good-quality PNG icons, but when the developer scales the images to 0.7, they look blurry.
I've noticed that if we set the image's scale to 0.5, they don't look blurry at all:
0.7: http://i.stack.imgur.com/jQNYG.png
0.5: http://i.stack.imgur.com/hBShu.png
Does anyone know why this happens?
I personally always work with 0.5 scales because that is how I was taught. Is there a logical reason for this?
Sorry if the answer is obvious; I am really curious about this. Thanks in advance.

What happens depends largely on the software you are using to shrink the image, but there is a major difference between reducing by 0.5 and by 0.7.
If you shrink by 0.5, you are combining 4 pixels into one.
If you shrink by 0.7 you are doing fractional sampling. 10 pixels in each direction get reduced to 7.
In 0.5 sampling, you read two pixels across and two pixels down for every output pixel.
In 0.7 sampling, you read 1.42857142857143 pixels (1 ÷ 0.7) in each direction. To do that you have to weight pixel values, and that weighting is what creates the blurriness in a drawing.
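To make the weighting concrete, here is a small Python sketch (a rough illustration only, using just the standard library) that prints, for one row of 10 pixels, which source pixels feed each destination pixel under a simple box filter; the fractional weights at 0.7 are what smear the edges:
def source_weights(n_src, scale):
    # For each destination pixel, print the (source_index, weight) pairs of a box filter.
    n_dst = int(n_src * scale)
    step = 1.0 / scale                      # source pixels consumed per destination pixel
    for d in range(n_dst):
        start, end = d * step, (d + 1) * step
        weights = []
        s = int(start)
        while s < end and s < n_src:
            overlap = min(end, s + 1) - max(start, s)   # fraction of source pixel s covered
            weights.append((s, round(overlap / step, 3)))
            s += 1
        print(f"dst {d}: {weights}")

source_weights(10, 0.5)   # every output pixel is exactly two source pixels at weight 0.5
source_weights(10, 0.7)   # most output pixels mix fractions of neighbouring source pixels

This is one-dimensional for clarity; an image does the same thing in both directions.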

It's because when you halve an image's size (in both dimensions), you are combining exactly 4 pixels into one. When you use a slightly off scale (such as 0.7), one and a fraction source pixels go into each output pixel in each dimension, which means the data from a single source pixel is spread across up to 4 output pixels instead of one, causing a blurry effect.
Sorry, making an example image would be quite difficult for me, but I hope you get the concept.

I think this has to do with interpolation. When you resize an image, there is no way of knowing exactly what is supposed to be in between the two pixels that are essentially being merged, so the computer tries to guess what the new pixel should look like by looking at the pixels around it and combining their values.
So, for example, in the image above it asks: what is in between white and orange? A less bright orange. OK, let's make the merged pixel look like that. Near a corner there might be more orange, so the new pixel will look more orange, and so on.
Now, when you scale by 0.5, the computer merges pixels at a constant rate: if you divide the image in half you always merge exactly 4 pixels into one. If you scale by 0.7, however, you are merging an irregular number of pixels, resulting in different concentrations of source pixels across the image, which makes it look blurry.
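As a tiny illustration of that guessing, here is how a weighted mix of two pixel colours might be computed in Python (the white and orange values are just example numbers, not taken from the actual image):
def mix(c1, c2, w):
    # Blend two RGB colours: w parts of c1 plus (1 - w) parts of c2.
    return tuple(round(w * a + (1 - w) * b) for a, b in zip(c1, c2))

white = (255, 255, 255)
orange = (255, 140, 0)
print(mix(white, orange, 0.6))   # (255, 209, 153): a paler orange, as described above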
If you don't understand this, I understand; I kind of went off on a tangent. If you need more clarification, comment below :)

Add an .img-crisp class to the image:
.img-crisp {
    image-rendering: -moz-crisp-edges;          /* Firefox */
    image-rendering: -o-crisp-edges;            /* Opera */
    image-rendering: -webkit-optimize-contrast; /* WebKit (non-standard value) */
    image-rendering: crisp-edges;
    -ms-interpolation-mode: nearest-neighbor;   /* IE (non-standard property) */
}
The image-rendering CSS property sets an image scaling algorithm. The
property applies to an element itself, to any images set in its other
properties, and to its descendants.

Related

What is the difference between cropping, resizing and scaling an image?

I am using Perl's Image::Imlib2 package to generate thumbnails from larger images.
I've done such tasks before with several ImageMagick interfaces (PHP, Ruby, Python) and it was relatively easy. I have no prior experience with Imlib2 and it is a long time since I wrote something in Perl, so I am sorry if this seems naive!
This is what I've tried so far. It is simple, and assumes that scaling an image will keep the aspect ratio, and the generated thumbnail will be an exact miniature copy of the original image.
use strict;
use warnings;
use Image::Imlib2;

my $dir = 'imgs/*';
my @files = glob($dir);

foreach my $img (@files) {
    my $image = Image::Imlib2->load($img);
    my $cropped_image = $image->create_scaled_image(50, 50);
    $cropped_image->save($img);
}
Original image
Generated image
My first look at the image tells me that something is wrong. It may be my ignorance of cropping, resizing and scaling, but the generated image displays incorrectly on small screens.
I've read What's the difference between cropping and resizing?, and honestly didn't understand anything. Also this one Image scaling.
Could someone explain the differences between those three ideas, and if possible give examples (preferably with Perl) to achieve better results? Or at least describe what I should consider when I want to create thumbnails?
The code you use isn't preserving the aspect-ratio. From Image::Imlib2::create_scaled_image
If x or y are 0, then retain the aspect ratio given in the other.
So change the line
my $cropped_image = $image->create_scaled_image(50, 50);
to
my $scaled_image = $image->create_scaled_image(50, 0);
and the new image will be 50 pixels wide, and its height computed so to keep the original aspect-ratio.
Since this is not cropping I've changed the variable name as well.
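If it helps to see the same idea elsewhere, here is a rough Pillow sketch of an aspect-preserving version of the loop from the question; the 'imgs' directory and the 50-pixel width come from the question, everything else is an assumption:
from pathlib import Path
from PIL import Image

# Width fixed at 50 px, height computed from the original aspect ratio,
# mirroring create_scaled_image(50, 0) above.
for path in Path('imgs').glob('*'):
    img = Image.open(path)
    new_h = round(img.height * 50 / img.width)   # keep the aspect ratio
    thumb = img.resize((50, new_h), Image.LANCZOS)
    img.close()
    thumb.save(path)                             # overwrite in place, like the Perl loop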
As for other questions, below is a basic discussion from comments. Please search for tutorials on image processing. Also, documentation of major libraries often have short and good explanations.
This is aggregated from comments deemed helpful. Also see Borodin's short and clear answer.
Imagine that you want to draw a picture (of some nice photograph) yourself in the following way. You draw a grid of, say, 120 (horizontally) by 60 (vertically) boxes; that is 120 × 60 = 7200 boxes. These are your "pixels," and you may fill each one with only one color. If the photo you are re-drawing is "mostly" blue at some spot, you color that pixel blue, and so on. It is not easy to end up with a faithful redrawing; the denser the pixels, the better.
Now imagine that you want to draw another copy of this, just smaller. If you make it 20x20 that will be completely different, since it's a square. The best chance of getting it to "look the same" is to pick 2-to-1 ratio (like 120x60), so say 40x20. That's "aspect-ratio." But there is still a problem, since now you have to decide all over again what color to pick for each box, so to represent what is "mostly" on the photo at that spot. There are algorithms for that ("sampling," see your second link). That's involved with "resizing." The "quality" of the obtained drawing clearly must be much worse.
So "resizing" isn't all that simple. But, for us users, we mostly need to roughly know what is involved, and to find out how to use these features in a library. So read documentation. Some uses are very simple, while sometimes you'll have to decide which "algorithm" to let it use, or some such. Again, what I do is read manuals carefully.
The basic version of "cropping" is simple -- you just cut off a part of the picture. Say, remove the first and last 20 columns and the bottom and top 10 rows, and from the initial 120x60 you get a picture of 80x40. This is normally done when outer parts of an image have just white areas (or, worse, black!). So you want to "cut out" the picture itself from the whole image. Many graphics tools can do that on their own, by analyzing the image and figuring out those areas. Or, we select and hit a button.
I'm still not certain that you understand the difference between these terms
Your original image is 752 × 500 pixels
Resizing is a vague term that just means making a picture a different size somehow
Scaling is to change the size of an image proportionally. Scaling your picture down by a factor of ten would result in an image 75 × 50 (it should be 75.2 but we can't have 0.2 of a pixel). Scaling it up would make it bigger
You have scaled your picture to 50 × 50 pixels, which is a vertical scale factor of 10 (500 ÷ 50) but a horizontal scale factor of about 15 (752 ÷ 50), so it appears squashed horizontally (or stretched vertically)
Cropping is to reduce an image by removing parts of it. To crop your image to 50 × 50 you would choose a 50 × 50 rectangle out of the whole picture and remove the rest. It would be a piece about the size of your monkey's nose, but you can pick any region you wish
zdim has shown you how you can call
$image->create_scaled_image(0, 50)
so that the height, or y-dimension, is reduced to 50, while the width, or x-dimension, is scaled by the same factor. That will result in a thumbnail 75 × 50 as above
I hope that helps
As I said in my comment, there is an Image::Magick Perl module if you would prefer to be back on familiar ground
Resizing and scaling are the same thing: you just change the size of the image. You can make it smaller or bigger.
Depending on the interface, you have to give either the new dimensions or a scaling factor for the operation. A factor less than or greater than 1.0 would make the image smaller or bigger. Smaller images are created by subsampling and bigger images by interpolation.
Cropping is very simple. You select a rectangular region of an image and that's your new image. It's like using scissors.
In your code example the image is named cropped_image although it is created through scaling, or resizing.
The output image is an image of size 50 x 50 pixels. That's what you did here:
my $cropped_image = $image->create_scaled_image(50, 50);
So no matter what your image looked like before, you stuff it into 50 x 50 pixels, in this case not only reducing the resolution but also changing the aspect ratio.
The image is not displayed improperly, it's displayed perfectly fine.
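To make the three terms concrete, here is a hedged Pillow sketch using the 752 × 500 example discussed above ('monkey.png' is a placeholder file name):
from PIL import Image

img = Image.open('monkey.png')                   # the 752 x 500 original

# Scaling: both dimensions shrink by the same factor, so nothing looks squashed.
scaled = img.resize((img.width // 10, img.height // 10))    # -> 75 x 50

# Resizing to a fixed 50 x 50 ignores the aspect ratio and squashes the picture.
squashed = img.resize((50, 50))

# Cropping: cut a 50 x 50 window out of the picture and discard the rest.
left, top = 350, 225                             # an arbitrary 50 x 50 region
cropped = img.crop((left, top, left + 50, top + 50))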

Pulling non-transparent areas to the center of the transparent areas in an image

I am working on an image processing project that has a few steps, and I am stuck on one of them. Here is the situation: I have segmented an image and subtracted the foreground from the background. Now I need to fill in the background.
So far I have tried inpainting algorithms. They don't work in my case because at least 40% of the background is missing; they fail when they have to reconstruct that much of an image. (By the way, these images give bad results even in Photoshop with the content-aware tool.)
Anyway, I've given up on inpainting and decided on something else. In my project I don't need to complete 100% of the background. Let me illustrate my idea:
As you can see in the image above, I want to pull the image into the black area (which is transparent) with minimal corruption. Any MATLAB code samples, techniques, keywords or approaches would be great. If you need further explanation, feel free to ask.
I can think of two crude ways to fill the hole:
Use roifill: it fills gaps in a 2-D image while preserving image smoothness.
Alternatively, you can use bwdist to find, for each black (hole) pixel, its nearest non-hole pixel and copy that pixel's color:
% bw: logical mask that is true for the hole (black/transparent) pixels
[~, nnIdx] = bwdist( ~bw );        % linear index of the nearest non-hole pixel
fillImg = IMG;
fillImg(bw) = IMG( nnIdx(bw) );    % copy that neighbour's value into the hole
Although this snippet works only for grayscale images, it is quite trivial to extend it to RGB color images.
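Outside MATLAB, roughly the same nearest-neighbour fill can be sketched with SciPy's distance transform (the array names and the hole-mask convention here are assumptions, not code from the question):
from scipy.ndimage import distance_transform_edt

def nn_fill(img, hole):
    # img: 2-D grayscale array; hole: boolean mask, True where pixels are missing.
    # The feature transform gives, for every pixel, the coordinates of the nearest
    # pixel lying outside the hole.
    _, (ny, nx) = distance_transform_edt(hole, return_indices=True)
    filled = img.copy()
    filled[hole] = img[ny[hole], nx[hole]]       # copy that neighbour's value in
    return filled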

Image Equalization to compensate for light sources

Currently I am involved in an image processing project dealing with human faces, but I am running into problems with images where the light source is on the left or right side of the face. In those cases the portion of the face away from the light source is darker. I want to distribute the brightness over the image more evenly, so that darker pixels become brighter while overly bright pixels become darker at the same time.
I have used gamma correction techniques to do this, but the results are not satisfactory. What I actually want is an output whose brightness is independent of the light source, in other words one where the brightness of the darker part is increased and the brightness of the brighter part is decreased. I am not sure I have stated the problem precisely, but this is a very common problem and I haven't found anything useful about it on the web.
1. Image with the light source on the right side
2. Image after increasing the brightness of the darker pixels [img = cv2.pow(img, 0.5)]
3. Image after decreasing the brightness of the bright pixels [img = cv2.pow(img, 2.0)]
I was thinking of taking the mean of images 2 and 3, but as you can see the overly bright pixels still persist in image 3, and I want to get rid of them. Any suggestions?
In the end I need an image with homogeneous brightness that is independent of the light source.
Take a look at homomorphic filtering applied to image enhancement, with which you can selectively filter the reflectance and illumination components of an image (a rough sketch follows below).
I found this paper, which I think addresses exactly the concern you have: http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202010/pdfs/0001374.pdf
You will need to compute the "gradient" of the image, i.e. Laplacian derivatives, for which you can read up here: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.html
I'd be very interested to hear about your implementation. If you run into trouble, post a comment here and I can try to help.
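This is not the paper's exact method, but a minimal homomorphic-style sketch with OpenCV and NumPy to show the idea; the file name and the blur sigma are assumptions:
import cv2
import numpy as np

# Illumination varies slowly across the face, so estimate it with a heavy blur of the
# log-image and subtract it, keeping mostly the reflectance (detail) component.
img = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) + 1.0

log_img = np.log(img)
illumination = cv2.GaussianBlur(log_img, (0, 0), 30)   # low-frequency lighting estimate
reflectance = log_img - illumination                   # detail with the lighting removed

out = np.exp(reflectance)
out = cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('face_even.png', out)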

Fast way of getting the dominant color of an image

I have a question about how to get the dominant color of an image (a photo). I thought of this algorithm: loop through all pixels, classify each one's color (red, green, yellow, orange, blue, magenta, cyan, white, grey or black, with some margin of course) and its darkness (light, dark or normal), and afterwards check which colors occurred most often. I think this is slow and not very precise. Is there a better way?
If it matters, it's a UIImage taken with an iPhone or iPod touch camera, at most 5 Mpx. It has to be fast because simply showing a progress indicator doesn't make much sense here: the app is for people with poor sight, or no sight at all. Because it's for a mobile device, it may not use much memory (at most 50 MB).
Your general approach should work, but I'd highlight some details.
Instead of your given list of colors, generate a number of color "bins" across the color spectrum in which to count pixels. Here's another question with some algorithms for that: Generating spectrum color palettes. Make the number of bins configurable so you can experiment to get the results you want.
Next, for each pixel you're considering, find the "nearest" color bin and increment it. You'll need to define "nearest"; see this article on color difference: http://en.wikipedia.org/wiki/Color_difference
For performance, you don't need to look at every pixel. Since image elements usually cover large areas (e.g., the sky, grass, etc.), you can get the result you want by only sampling a few pixels. I'd guess that you could get good results sampling every 10th pixel, or even every 100th. You can experiment with that factor as well.
Averaging pixels can also be done, as in this demo: jsfiddle.net/MUsT8/
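A rough Python/Pillow sketch of the binning-plus-sparse-sampling idea above (the bin count, sampling step and file name are all assumptions, and this only illustrates the approach rather than iOS code):
from collections import Counter
from PIL import Image

def dominant_color(path, bins_per_channel=4, step=10):
    # Quantise each RGB channel into a few bins and count only every `step`-th pixel.
    img = Image.open(path).convert('RGB')
    pixels = list(img.getdata())[::step]
    width = 256 // bins_per_channel
    counts = Counter((r // width, g // width, b // width) for r, g, b in pixels)
    (rb, gb, bb), _ = counts.most_common(1)[0]
    # Report the centre of the winning bin as the dominant colour.
    return (rb * width + width // 2, gb * width + width // 2, bb * width + width // 2)

print(dominant_color('photo.jpg'))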

How do I locate black rectangles in a grid and extract the binary code from that

I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit and white (blank space, nothing visible) represents a 1 bit.
Does anybody have an idea how to process the image in order to extract this information? My project is written in Java, but any solution is welcome.
Thanks all for the support.
I'm not an expert in image processing; I tried to apply edge detection using a Canny edge detector (a free Java implementation can be found here). I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png], reduced it (scale factor = 0.4) for faster processing, and this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit and no rectangle as a 1?
The image has 10 rows × 16 columns. I don't use Python, but I can try to convert a solution to Java.
Many thanks for the support.
This is good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create a horizontal and a vertical profile of the image, i.e. sum up the values in all columns and in all rows and store them in arrays. For an image of M × N (width × height) you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells define the locations of interest (where you expect the marks).
Based on this, you can define lozenges in the form (rectangles where you expect marks), get their coordinates, and then just add up the pixel values in each lozenge; based on that number you can decide whether it holds a mark or not. (A rough sketch of the profile approach follows after the cases below.)
Case 2: Skew (slant in the image)
Use a Fourier transform (FFT) to estimate the slant and then transform the image to correct it.
Case 3: Stretch or shrink
Pretty much the same as 1 but noise is higher and reliability less.
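Here is a rough Python/NumPy sketch of the profile approach for the simple case; the file name, the 128 binarisation threshold and the profile threshold are assumptions:
import numpy as np
from PIL import Image

dark = np.array(Image.open('grid.png').convert('L')) < 128   # True where a pixel is dark

col_profile = dark.sum(axis=0)   # dark pixels per column
row_profile = dark.sum(axis=1)   # dark pixels per row

# Rows/columns whose profile exceeds a small threshold contain marks; the first and
# last of them bound the grid (this assumes the outer rows/columns hold at least one mark).
mark_cols = np.where(col_profile > 2)[0]
mark_rows = np.where(row_profile > 2)[0]
grid = dark[mark_rows[0]:mark_rows[-1] + 1, mark_cols[0]:mark_cols[-1] + 1]
print('grid region:', grid.shape)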
Aliostad has made some good comments.
This is OMR and you will find it much easier to get good consistent results with a good image processing library. www.leptonica.com is a free open source 'C' library that would be a very good place to start. It could process the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference / registration marks to allow for skew and stretch especially if you are using document scanning or capturing from a camera image.
I am not familiar with Java, but in Python you can use the imaging library (PIL) to open the image, read its width and height, and segment it into a grid accordingly (height ÷ rows and width ÷ columns). Then just look for black pixels in those regions, or whatever colour PIL registers that black to be. This obviously relies on the grid-like nature of the data (a rough sketch follows after the edits below).
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, such as one of those described on Wikipedia; I used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 to black (increase this value if you want the boxes darker) and make everything else completely white. Then create bounding boxes from the lines where the pixels are all white. If the data isn't terribly skewed this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
Edit2:
Denis, how large is your dataset and how large are the images? If you have thousands of these images, it is not feasible to remove the borders (the red background and yellow bars) manually. I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise:
The previous segmentation method may then be applied if you preprocess and binarise in the following manner, in which case you only need to count the number of black or white pixels in each cell and choose a threshold from a few training samples.
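Here is a rough sketch of that counting step with PIL and NumPy, assuming the image has already been cropped to the grid and deskewed; 'grid.png', the 10 × 16 layout and the 0.5 fill threshold are assumptions:
import numpy as np
from PIL import Image

dark = np.array(Image.open('grid.png').convert('L')) < 128   # binarise: True = black

ROWS, COLS = 10, 16
cell_h = dark.shape[0] // ROWS
cell_w = dark.shape[1] // COLS

bits = []
for r in range(ROWS):
    row_bits = []
    for c in range(COLS):
        cell = dark[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
        row_bits.append(0 if cell.mean() > 0.5 else 1)   # black rectangle -> 0, blank -> 1
    bits.append(row_bits)

for row_bits in bits:
    print(''.join(str(b) for b in row_bits))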
