I have a 3x3 PNG image embedded in an SVG (pixel colours, in order: white, black, red, green, yellow, blue, magenta, cyan, white) that I want to display over a larger area without interpolation.
Can this be done?
SVG has an image-rendering attribute. Unfortunately, none of its values guarantees nearest-neighbour interpolation; optimizeSpeed is the closest. Firefox supports an additional value, -moz-crisp-edges, which does guarantee nearest neighbour, and I think WebKit has -webkit-optimize-contrast, which does much the same.
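For illustration, a minimal sketch of how those values could be applied from script to an SVG <image> element (the element id is made up; a value an engine does not recognise is silently ignored by the CSSOM, so the prefixed values act as per-engine fallbacks):

```typescript
// Hypothetical element id; adjust to your document.
const img = document.querySelector<SVGImageElement>('#tinyBitmap');
if (img) {
  // Standard presentation attribute; optimizeSpeed is the closest standard value.
  img.setAttribute('image-rendering', 'optimizeSpeed');
  // Firefox-only value that does guarantee nearest neighbour.
  img.style.setProperty('image-rendering', '-moz-crisp-edges');
  // WebKit's rough equivalent; ignored by engines that don't support it.
  img.style.setProperty('image-rendering', '-webkit-optimize-contrast');
}
```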
Related
I am using pixel colour inspection to detect collisions. I know there are other ways to achieve this but this is my use case.
I draw a shape cloned from the main canvas onto a second canvas, switching the fill and stroke colours to pure black. I then use getImageData() to get an array of pixel colours and inspect them; if I see black, I have a collision with something.
However, some pixels are shades of grey because the second canvas is applying antialiasing to the shape. I want only black or transparent pixels.
How can I get the second canvas to be composed of either transparent or black only?
I achieved this long ago with Windows GDI via compositing/XOR combinations and the like; however, GDI did not always apply antialiasing. I guess the answer lies in globalCompositeOperation or filter, but I cannot see what settings, filters, or sequence to apply.
I appreciate that I have not provided sample code, but I am hoping someone can throw me a bone; I'll work up a snippet here that might become a standard cut-and-paste for posterity.
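To throw a bone back in code form: one pragmatic route is to let the second canvas antialias as it normally would, then snap every pixel to either opaque black or fully transparent before inspecting it. A minimal sketch, assuming a drawShape callback and a 128 alpha cut-off (both placeholders, not from the question):

```typescript
// Draw the shape in pure black on the second canvas, then force every pixel
// to be either opaque black or fully transparent before inspection.
function blackOrTransparentMask(ctx: CanvasRenderingContext2D,
                                drawShape: (c: CanvasRenderingContext2D) => void): ImageData {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.fillStyle = '#000';
  ctx.strokeStyle = '#000';
  drawShape(ctx);                          // clone of the shape from the main canvas

  const img = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
  const px = img.data;                     // RGBA, 4 bytes per pixel
  for (let i = 0; i < px.length; i += 4) {
    if (px[i + 3] >= 128) {                // mostly covered: force opaque black
      px[i] = px[i + 1] = px[i + 2] = 0;
      px[i + 3] = 255;
    } else {                               // mostly empty: force fully transparent
      px[i + 3] = 0;
    }
  }
  ctx.putImageData(img, 0, 0);             // optional: write the cleaned mask back
  return img;
}
```

As far as I know, the 2D canvas API offers no switch to disable antialiasing of filled or stroked paths (imageSmoothingEnabled only affects drawImage scaling), so thresholding after the fact is usually the simplest option.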
I've seen a few libraries that pixelate images, some of them even feature non-square shapes such as The Pixelator's circle and diamond shapes.
However, I'm looking to make a particular shape: I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated, but it would use tallish rectangular shapes as the pixel base.
Are there any libraries out there that do this? If not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I am not understanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles the size you want (in this case 19x27). Loop over each section of the grid and take the average color of the pixels inside (you can simply take the average of each channel in RGB independently). Then set all of the pixels contained inside to the average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
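A rough sketch of that loop over a canvas ImageData, assuming 19x27 blocks; the names and structure are illustrative rather than taken from any particular library:

```typescript
// Replace each blockW x blockH rectangle with the average colour of its pixels.
function pixelate(img: ImageData, blockW = 19, blockH = 27): void {
  const { width, height, data } = img;
  for (let by = 0; by < height; by += blockH) {
    for (let bx = 0; bx < width; bx += blockW) {
      const w = Math.min(blockW, width - bx);
      const h = Math.min(blockH, height - by);
      let r = 0, g = 0, b = 0, a = 0;
      // Average each channel over the block.
      for (let y = by; y < by + h; y++) {
        for (let x = bx; x < bx + w; x++) {
          const i = (y * width + x) * 4;
          r += data[i]; g += data[i + 1]; b += data[i + 2]; a += data[i + 3];
        }
      }
      const n = w * h;
      r /= n; g /= n; b /= n; a /= n;
      // Write the average back to every pixel in the block.
      for (let y = by; y < by + h; y++) {
        for (let x = bx; x < bx + w; x++) {
          const i = (y * width + x) * 4;
          data[i] = r; data[i + 1] = g; data[i + 2] = b; data[i + 3] = a;
        }
      }
    }
  }
}
```

A shortcut that roughly approximates the same result: drawImage the source into a canvas of size (width/19, height/27), then scale it back up with imageSmoothingEnabled set to false.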
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the box-blur image-processing algorithm, except that instead of grabbing nearby pixels you would sample according to the divisions of the original image relative to the 19x27 divisions of the resulting image.
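A rough CPU-side sketch of that sampling idea (in place of an actual shader): every output pixel copies one representative sample from the centre of the 19x27 division it falls in. The names and the centre-sampling choice are assumptions for illustration:

```typescript
// Map each output pixel back to a single sample taken at the centre of its division.
function pixelateBySampling(src: ImageData, blockW = 19, blockH = 27): ImageData {
  const out = new ImageData(src.width, src.height);
  for (let y = 0; y < src.height; y++) {
    for (let x = 0; x < src.width; x++) {
      // Centre of the division this pixel belongs to, clamped to the image.
      const sx = Math.min(src.width - 1, Math.floor(x / blockW) * blockW + Math.floor(blockW / 2));
      const sy = Math.min(src.height - 1, Math.floor(y / blockH) * blockH + Math.floor(blockH / 2));
      const si = (sy * src.width + sx) * 4;
      const di = (y * src.width + x) * 4;
      out.data.set(src.data.subarray(si, si + 4), di);
    }
  }
  return out;
}
```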
I have a PNG image like this:
The blue colour represents transparency, and each circle is a group of pixels. I would like to find the biggest group and remove all the smaller pixel groups that are not connected to it. In this example, the biggest one is the red circle, so I will keep it; the green and yellow ones are too small, so I will remove them. After that, I will have something like this:
Any ideas? Thanks.
If you consider only the size of the objects, use the following algorithm: label the connected components of the objects' mask image (all object pixels white, transparent ones black). Then compute the areas of the connected components and filter them. At this step, you have a label map and a list of authorized labels. You can then read the label map and overwrite the mask image, setting every pixel to white if it has an authorized label.
OpenCV does not seem to have a labelling function, but cvFloodFill can do the same thing with several calls: for each unlabelled white pixel, call FloodFill with this pixel as the seed. You can then store the result in an array (of the size of the image) by assigning each newly filled pixel its label. Repeat this as long as you have unlabelled pixels.
Otherwise, you can implement the connected-component labelling for binary images yourself; the algorithm is well known and easy to implement (maybe start from Matlab's bwlabel).
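For illustration, a hand-rolled sketch of the labelling-and-filtering idea (not using OpenCV): flood-fill from each unlabelled object pixel to build a label map, record each label's area, then keep only the largest component. The flat width*height mask layout (1 = object, 0 = transparent) is an assumption:

```typescript
// Keep only the largest 4-connected component in a binary mask, in place.
function keepLargestComponent(mask: Uint8Array, width: number, height: number): void {
  const labels = new Int32Array(width * height);   // 0 = unlabelled
  const areas: number[] = [0];                      // areas[label] = pixel count
  let nextLabel = 1;

  for (let start = 0; start < mask.length; start++) {
    if (mask[start] === 0 || labels[start] !== 0) continue;
    // Flood fill (4-connectivity) from this seed with a new label.
    const label = nextLabel++;
    const stack = [start];
    labels[start] = label;
    let area = 0;
    while (stack.length > 0) {
      const p = stack.pop()!;
      area++;
      const x = p % width, y = (p / width) | 0;
      const neighbours = [
        x > 0 ? p - 1 : -1, x < width - 1 ? p + 1 : -1,
        y > 0 ? p - width : -1, y < height - 1 ? p + width : -1,
      ];
      for (const q of neighbours) {
        if (q >= 0 && mask[q] === 1 && labels[q] === 0) {
          labels[q] = label;
          stack.push(q);
        }
      }
    }
    areas.push(area);
  }

  if (nextLabel === 1) return;                      // no components at all
  // The only "authorized" label here is the one with the largest area.
  const biggest = areas.indexOf(Math.max(...areas));
  for (let i = 0; i < mask.length; i++) {
    mask[i] = labels[i] === biggest ? 1 : 0;
  }
}
```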
The handiest way to filter objects, if you have a priori knowledge of their size, is to use morphological operators. In your case, with OpenCV, once you've loaded your image (OpenCV supports PNG), you have to perform an "opening", that is, an erosion followed by a dilation.
The small objects (smaller than the structuring element you chose) will disappear with the erosion, while the bigger ones will remain and be restored by the dilation.
(For reference, see cv::morphologyEx in the OpenCV documentation.)
The shape of the big object might be altered. If you're only doing detection, this is harmless, but if you want the object's shape preserved you'll need to apply a "top hat" transform.
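For reference, a rough hand-rolled sketch of such an opening on a binary mask with a square structuring element; it stands in for cv::morphologyEx, and the mask layout (1 = object) and radius are assumptions:

```typescript
// Morphological opening: erosion followed by dilation with a (2*radius+1)^2 square element.
function morphOpen(mask: Uint8Array, width: number, height: number, radius: number): Uint8Array {
  const pass = (src: Uint8Array, hit: number): Uint8Array => {
    const dst = new Uint8Array(src.length);
    for (let y = 0; y < height; y++) {
      for (let x = 0; x < width; x++) {
        let found = false;
        // Scan the neighbourhood for a pixel equal to `hit`.
        for (let dy = -radius; dy <= radius && !found; dy++) {
          for (let dx = -radius; dx <= radius && !found; dx++) {
            const nx = x + dx, ny = y + dy;
            if (nx >= 0 && ny >= 0 && nx < width && ny < height && src[ny * width + nx] === hit) {
              found = true;
            }
          }
        }
        // Erosion (hit = 0): pixel survives only if no background was found nearby.
        // Dilation (hit = 1): pixel turns on if any foreground was found nearby.
        dst[y * width + x] = hit === 0 ? (found ? 0 : 1) : (found ? 1 : 0);
      }
    }
    return dst;
  };
  const eroded = pass(mask, 0);    // erosion removes objects smaller than the element
  return pass(eroded, 1);          // dilation restores the surviving objects
}
```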
I was making a circular icon with semi-transparency, so I started with a large filled-in circle with a black border, then I did white->alpha, and resized the image to my required size. Would it have made a difference if I resized first, and then did white->alpha?
Thanks.
Yes.
In general, whenever you are re-sampling, this will have an impact if you are using any anti-aliasing, or if the resampling algorithm is something other than nearest neighbour.
Try the following exercise for a visual example:
In both cases, create your circular icon.
Case 1:
Change the white center of the circle to alpha (0%, fully transparent).
Re-sample the entire image (e.g. down-sample to 25%) using something other than nearest neighbour (i.e. actually use antialiasing of some sort).
Paste a copy of the result over a red background.
You should only see black and red colors inside the circle when you zoom in, with a smooth transition from black-to-red.
Case 2:
Re-sample the entire image (e.g. down-sample to 25%) using something other than nearest neighbour (i.e. actually use antialiasing of some sort).
Change the white center of the circle to alpha (0%, fully transparent).
Paste a copy of the result over a red background.
You should see a black outer circle with a bit of a white halo inside it, then the red center, with a smooth black-to-white transition and a sharp white-to-red transition. This will depend on the tolerance (aggressiveness) you set on the magic-wand tool you are likely using to auto-select the region whose alpha properties you want to modify.
Now repeat case 2, but disable any sort of anti-aliasing and enforce the use of a nearest-neighbour algorithm rather than bi-cubic spline, Hermite, Gaussian, etc. Your results will look very similar to case 1, except you won't see the smooth transition from black to red when you zoom in; you will just see a sharp black-to-red transition.
In general, you will get the best subjective quality when working on your images first and re-sampling later. If you paste the result as its own layer, you still have all the image data available and none is lost; the image is just rendered smaller.
Maybe you've noticed, but Google Image Search now has a feature where you can narrow results by color. Does anyone know how they do this? Obviously, they've indexed information about each image.
I am curious what the best methods are for analyzing an image's color data to allow simple color searching.
Thanks for any and all ideas!
Averaging the colours is a great start. Just downscale your image to 10% of the original size using a bicubic or bilinear filter (or something advanced, anyway). This will vastly reduce the colour noise and give you a result that is closer to how humans perceive the image. For example, a pixel raster consisting purely of yellow and blue pixels would become a clean green.
If you don't blur or downsize the image, you might still end up with an average of green, but the deviation would be huge.
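A small sketch of that downscale-then-average idea using a browser canvas; the 10% factor follows the suggestion above, and the browser's own drawImage filtering stands in for the bicubic/bilinear filter:

```typescript
// Downscale to 10% on an offscreen canvas, then average the remaining pixels.
function dominantAverageColour(img: HTMLImageElement): [number, number, number] {
  const w = Math.max(1, Math.round(img.naturalWidth * 0.1));
  const h = Math.max(1, Math.round(img.naturalHeight * 0.1));
  const canvas = document.createElement('canvas');
  canvas.width = w;
  canvas.height = h;
  const ctx = canvas.getContext('2d')!;
  ctx.imageSmoothingEnabled = true;        // let the browser filter while downscaling
  ctx.drawImage(img, 0, 0, w, h);

  const data = ctx.getImageData(0, 0, w, h).data;
  let r = 0, g = 0, b = 0;
  for (let i = 0; i < data.length; i += 4) {
    r += data[i]; g += data[i + 1]; b += data[i + 2];
  }
  const n = w * h;
  return [Math.round(r / n), Math.round(g / n), Math.round(b / n)];
}
```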
The Google feature offers 12 colors with which to match images. So I would calculate the Lab coordinates of each of these swatches and plot the (a*, b*) coordinate of each color in a two-dimensional space. I'd drop the L* component because the luminance (brightness) of a pixel should be ignored. Using the 12 points in the (a*, b*) space, I'd compute a partitioning using a Voronoi diagram. Then, for a given image, I'd take each pixel and calculate its (a*, b*) coordinate; doing this for every pixel in the image builds up a histogram of counts in each Voronoi partition. The partition containing the highest pixel count would then be considered the image's "color".
This would form the basis of the algorithm, although there would be refinements related to ignoring black and white background regions which are perceptually not considered to be part of the subject of the image.
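A hedged sketch of that pipeline: convert each pixel to Lab, assign it to the nearest swatch in the (a*, b*) plane (which is exactly what looking it up in the Voronoi partition of the swatch points amounts to), and report the swatch with the highest count. The sRGB/D65 conversion constants are standard; the three swatches at the end are placeholders, not Google's actual set:

```typescript
// sRGB (0..255 per channel) -> CIE Lab, via linear RGB and XYZ (D65 white point).
function srgbToLab(r: number, g: number, b: number): [number, number, number] {
  const lin = (c: number) => {
    c /= 255;
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  const [R, G, B] = [lin(r), lin(g), lin(b)];
  const X = (0.4124 * R + 0.3576 * G + 0.1805 * B) / 0.95047;
  const Y =  0.2126 * R + 0.7152 * G + 0.0722 * B;
  const Z = (0.0193 * R + 0.1192 * G + 0.9505 * B) / 1.08883;
  const f = (t: number) => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
  return [116 * f(Y) - 16, 500 * (f(X) - f(Y)), 200 * (f(Y) - f(Z))];
}

type Swatch = { name: string; rgb: [number, number, number] };

// Count pixels per nearest swatch in the (a*, b*) plane and return the winner's name.
function dominantSwatch(img: ImageData, swatches: Swatch[]): string {
  const centres = swatches.map(s => srgbToLab(...s.rgb));
  const counts = new Array(swatches.length).fill(0);
  for (let i = 0; i < img.data.length; i += 4) {
    const [, a, b] = srgbToLab(img.data[i], img.data[i + 1], img.data[i + 2]);
    let best = 0, bestDist = Infinity;
    for (let s = 0; s < centres.length; s++) {
      // Distance in the (a*, b*) plane only, ignoring L* as suggested above.
      const d = (a - centres[s][1]) ** 2 + (b - centres[s][2]) ** 2;
      if (d < bestDist) { bestDist = d; best = s; }
    }
    counts[best]++;
  }
  return swatches[counts.indexOf(Math.max(...counts))].name;
}

// Placeholder swatch list for illustration only.
const swatches: Swatch[] = [
  { name: 'red',   rgb: [220, 40, 40] },
  { name: 'green', rgb: [60, 160, 60] },
  { name: 'blue',  rgb: [50, 90, 200] },
];
```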
Average color of all pixels? Make a histogram and find the average of the 'n' peaks?