I have to scale down an image of arbitrary dimensions to a fixed size of 135x135; the most important thing is that I have to maintain good quality in the scaled-down image. I'm not very familiar with image processing algorithms. Can anyone suggest an algorithm?
Unless the input image is already square, say 1000x1000, you will first have to crop it to a square aspect ratio (1:1) and then scale it down to 135x135 pixels.
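A minimal sketch of that with Pillow, whose ImageOps.fit does the center-crop-then-resize in one call (the file names here are assumptions):

```python
from PIL import Image, ImageOps

img = Image.open("input.jpg")  # hypothetical input path
# Crop to a 1:1 aspect ratio around the center, then resize to 135x135,
# using Lanczos resampling for good downscaling quality.
thumb = ImageOps.fit(img, (135, 135), method=Image.Resampling.LANCZOS)
thumb.save("thumb_135.jpg")
```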
First decide whether you want to crop the image or deform it to fit the box.
Apply a 2D sinc filter sized for the current scale factor.
Scan the new image and pick up pixels from the old one by dividing each new coordinate by the scale factor.
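A rough illustration of those two steps in Python, with a Gaussian used as a practical stand-in for the ideal sinc low-pass (the file names, square-input assumption, and sigma choice are mine):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from PIL import Image

img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float64)
scale = img.shape[0] / 135.0  # assumes a square input for simplicity

# Step 1: low-pass filter to remove frequencies the smaller image cannot
# hold (a Gaussian with sigma proportional to the scale factor stands in
# for the windowed sinc filter).
smoothed = gaussian_filter(img, sigma=(scale / 2, scale / 2, 0))

# Step 2: pick up pixels from the old image by mapping each new coordinate
# back through the scale factor.
coords = (np.arange(135) * scale).astype(int)
small = smoothed[coords][:, coords].astype(np.uint8)
Image.fromarray(small).save("out_135.png")
```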
I have a square image with a ragged edge: the transparent pixels outside the image "weave" in and out towards the image center, within some unknown range. This range may be different for each side.
Is there an algorithm that would crop the image to the largest size possible with no transparent pixels remaining? I can think of an iterative one: start with a small cropping square in the center. If no transparent pixels are detected, start again but enlarge the cropping square by 1 pixel. Then repeat. Once you detect transparent pixels after cropping, go back one step and save the result.
There is an obvious algorithm that comes to mind:
Find y* = min { y : P(x, y) is transparent for some x }, where P(x, y) is the pixel at coordinate (x, y), then crop the image to the rows [0, y*) (assuming the image starts at zero at the bottom, and that transparent pixels always occur at the top of the image).
Note that this algorithm has a serious downside: if y* happens to be very close to 0 because of an errant transparent pixel, you will end up cropping away almost your entire image.
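A minimal numpy sketch of that scan (file names assumed; note that numpy's row 0 is the first row of the array, so the "bottom" of the answer's convention maps to row 0 here):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("ragged.png").convert("RGBA"))
alpha = img[..., 3]

# Rows that contain at least one fully transparent pixel.
has_transparent = (alpha == 0).any(axis=1)

# y* = first row containing transparency; keep only the rows before it.
transparent_rows = np.flatnonzero(has_transparent)
y_star = transparent_rows[0] if transparent_rows.size else img.shape[0]
Image.fromarray(img[:y_star]).save("cropped.png")
```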
If you want a more robust solution, I believe you will have to frame this as an optimization problem and solve it, allowing some errant transparent pixels to be masked instead of cropped. An energy-based formulation solved with graph cuts would do well here; for example, see the GrabCut algorithm.
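For reference, a minimal OpenCV GrabCut sketch (the rectangle is a made-up rough subject box; this only illustrates the graph-cut machinery, not a ready-made solution to the transparency problem):

```python
import numpy as np
import cv2

img = cv2.imread("photo.png")              # hypothetical input
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)  # internal GMM state
fgd_model = np.zeros((1, 65), np.float64)

# Rough box around the subject; everything outside it is seeded as background.
rect = (10, 10, img.shape[1] - 20, img.shape[0] - 20)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground pixels.
fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
cv2.imwrite("subject.png", img * fg[:, :, None])
```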
Depending on your requirements and how bad your data can get, you can make a judgment call about how involved your solution needs to be, but at this point I would highly recommend clarifying both of those things.
I've seen a few libraries that pixelate images, some of them even feature non-square shapes such as The Pixelator's circle and diamond shapes.
I'm looking, however, to make a particular shape: I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated, but it would use tallish rectangles as the pixel base.
Are there any libraries out there that do this? If not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I am not understanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles of the size you want (in this case 19x27). Loop over each cell of the grid and take the average color of the pixels inside (you can simply average each RGB channel independently), then set all of the pixels in the cell to that average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
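A minimal numpy sketch of that grid loop (file names assumed; since 19x27 is width x height, the numpy slices use 27 rows by 19 columns):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float64)
bh, bw = 27, 19  # block height and width in pixels

out = img.copy()
for y in range(0, img.shape[0], bh):
    for x in range(0, img.shape[1], bw):
        block = img[y:y + bh, x:x + bw]
        # Average each RGB channel independently over the block...
        mean = block.mean(axis=(0, 1))
        # ...and paint the whole block with that average color.
        out[y:y + bh, x:x + bw] = mean

Image.fromarray(out.astype(np.uint8)).save("pixelated.png")
```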
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the box blur image processing algorithm, except that instead of grabbing from the nearby pixels you would grab from the divisions of the original image that correspond to the 19x27 divisions of the resulting image.
I'm looking for methods for histogram blurring in image processing. I found this old thread, but the answers there do not solve my case.
One answer there suggests that
There is actually nothing called Histogram blurring.
So is there any way to do histogram blurring in image processing?
[edit1] some more info
The image size is 3880x2592.
I want to blur with a Gaussian blur with a radius of about 15-20 (pixels?).
I am using 256×16-bit 8ea(?) single-port memories.
I want to implement this on an FPGA.
If by blur you mean smooth (removing high frequencies), then you can use any smoothing filter or algorithm (most of them are based on FIR low-pass filters).
If your question is what to smooth, then the answer is the same as in the question you linked: it depends on what you need:
if you need a smoothed histogram for some computation, then smooth the histogram directly and leave the image as is
if you need the image colors to be smoothed, then smooth the image and recompute the histogram
sometimes it is hard to smooth the image enough to get a smoothed histogram
(due to slow bleeding of colors, or too great a data loss)
In that case you can smooth the histogram (remembering the original), then compute the area change for each color and statistically recolor the whole image (which is not an easy task):
pick a (random) pixel of color a whose area needs to decrease and recolor it to the closest color b whose area needs to increase
update the areas of both colors
loop until the areas match ... (see the sketch below)
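A rough grayscale sketch of that loop in Python (the sigma, file names, and surplus-first ordering are my own choices; a real implementation would need to be far more careful about color distance and speed, and FPGA constraints are ignored here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from PIL import Image

img = np.asarray(Image.open("input.png").convert("L"))  # grayscale for simplicity
hist = np.bincount(img.ravel(), minlength=256)

# Smooth the histogram directly, keeping the total pixel count unchanged.
target = gaussian_filter1d(hist.astype(np.float64), sigma=15)
target = np.round(target * hist.sum() / target.sum()).astype(np.int64)

out = img.copy()
diff = np.bincount(out.ravel(), minlength=256) - target  # >0 means surplus
rng = np.random.default_rng(0)
for a in np.argsort(-diff):                    # largest surplus first
    while diff[a] > 0:
        deficits = np.flatnonzero(diff < 0)
        if deficits.size == 0:
            break
        b = deficits[np.argmin(np.abs(deficits - a))]     # closest needy color
        n = min(diff[a], -diff[b])             # pixels to move from a to b
        ys, xs = np.nonzero(out == a)
        idx = rng.choice(len(ys), size=n, replace=False)  # random pixels of color a
        out[ys[idx], xs[idx]] = b
        diff[a] -= n
        diff[b] += n

Image.fromarray(out).save("recolored.png")
```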
I am confused about choosing the dimensions for a background image for my website. I am creating a collage-like image in Picasa, and it defines the image dimensions using aspect ratios. If I want to create an image of 2048x1800, what aspect ratio should I use?
Wolfram Alpha gives the exact result 256/225, so an aspect ratio of 256:225 is correct. (Of course, a less exact value would be 1.14:1, which would give you something like 2052x1800.)
As the aspect ratio is the ratio of the width of the image to its height, the aspect ratio of a 2048x1800 image is about 1.14:1 (2048/1800 ≈ 1.14).
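To get the exact reduced ratio yourself, divide both dimensions by their greatest common divisor; a quick check in Python:

```python
from math import gcd

w, h = 2048, 1800
g = gcd(w, h)                # 8
print(f"{w // g}:{h // g}")  # 256:225 -- the exact aspect ratio
print(round(w / h, 2))       # 1.14 -- the rounded decimal form
```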
Maybe you've noticed but Google Image search now has a feature where you can narrow results by color. Does anyone know how they do this? Obviously, they've indexed information about each image.
I am curious what the best methods are for analyzing an image's color data to allow simple color searching.
Thanks for any and all ideas!
Averaging the colours is a great start. Just downscale your image to 10% of its original size using a bicubic or bilinear filter (or something reasonably advanced anyway). This will vastly reduce the colour noise and give you a result that is closer to how humans perceive the image; i.e., a pixel raster consisting purely of yellow and blue pixels would become a clean green.
If you don't blur or downsize the image, you might still end up with an average of green, but the deviation would be huge.
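A minimal sketch of that with Pillow and numpy (the 10% factor and file name are assumptions):

```python
import numpy as np
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
# Downscale to 10% with a bicubic filter to suppress colour noise first.
small = img.resize((max(1, img.width // 10), max(1, img.height // 10)),
                   Image.Resampling.BICUBIC)
# Then average the remaining pixels per RGB channel.
avg = np.asarray(small, dtype=np.float64).mean(axis=(0, 1))
print("average colour (R, G, B):", avg.round().astype(int))
```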
The Google feature offers 12 colors with which to match images. So I would calculate the Lab coordinates of each of these swatches and plot the (a*, b*) coordinate of each color in a two-dimensional space, dropping the L* component because the luminance (brightness) of the pixel should be ignored. Using the 12 points in the (a*, b*) space, I'd calculate a partitioning using a Voronoi diagram. Then, for a given image, I'd take each pixel and calculate its (a*, b*) coordinate; doing this for every pixel builds up a histogram of counts in each Voronoi partition. The partition that contains the highest pixel count would then be considered the image's 'color'.
This would form the basis of the algorithm, although there would be refinements related to ignoring black and white background regions which are perceptually not considered to be part of the subject of the image.
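A compact sketch of the per-pixel classification (the swatch list is a made-up stand-in for Google's 12 colors; nearest-neighbor in (a*, b*) is exactly the Voronoi-cell test):

```python
import numpy as np
from PIL import Image
from skimage.color import rgb2lab

# Hypothetical swatches standing in for Google's 12 colors (RGB in 0..1).
swatches = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                     [1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=np.float64)
swatch_ab = rgb2lab(swatches[None, :, :])[0][:, 1:]  # keep only (a*, b*)

img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float64) / 255.0
pix_ab = rgb2lab(img)[..., 1:].reshape(-1, 2)        # drop L*, flatten pixels

# Assign each pixel to its nearest swatch in (a*, b*); this is equivalent to
# asking which Voronoi cell of the swatch points the pixel falls into.
d2 = ((pix_ab[:, None, :] - swatch_ab[None, :, :]) ** 2).sum(axis=2)
labels = d2.argmin(axis=1)

hist = np.bincount(labels, minlength=len(swatches))
print("dominant swatch index:", hist.argmax())
```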
Average color of all pixels? Make a histogram and find the average of the 'n' peaks?