How to set pixelwise ink limit for DEVICE=tiffsep1 - ghostscript

I am using ghostscript's tiffsep1 device as a RIP for an inkjet system and want to limit the amount of ink applied per dot. I got the impression that the ink limits from ICC profiles only work at the coarser DitherPPI resolution, but not at the device DPI. Is there a way to limit ink at that resolution as well?
For more clarity, see these two images of a (0.75, 0.5, 0, 0) CMYK colored area:
[Cyan and Magenta separation images]
Clearly the upper-left pixel is set in both channels, giving 200% ink, whereas I want to limit it to 100%. Even when I apply a "global" ink limit of 100% by scaling down to (0.6, 0.4, 0, 0) CMYK, we get this:
[Cyan and Magenta separation images]
so there is still enough overlap that 200% ink is reached pixelwise in places.
Is there some setting in ghostscript to avoid this?
PS: I am aware that the situation is worse here because all channels use the same angle and frequency, so the cell centers will always overlap. However, even with the default angles we would eventually get some pixelwise overlap in some places, which is what I want to avoid.

As far as I know there isn't any way to do what you want. You could produce a contone image and halftone it yourself of course.
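For what it's worth, here is a rough sketch of that route in Python (numpy + Pillow). It renders contone separations with Ghostscript's tiffsep device, clamps the total ink per pixel, and then halftones all channels against one shared threshold so that at most one ink can fire per pixel. The file names, the resolution, and the assumption that the separations are written with 255 = no ink are illustrative; check them against your Ghostscript build.

import subprocess
import numpy as np
from PIL import Image

# Render 8-bit contone separations; tiffsep writes one grayscale TIFF per
# ink plus a composite. Adjust the separation file names to whatever your
# Ghostscript version actually produces.
subprocess.run([
    "gs", "-dBATCH", "-dNOPAUSE", "-sDEVICE=tiffsep",
    "-r600", "-o", "out.tif", "input.pdf",
], check=True)

channels = ["Cyan", "Magenta", "Yellow", "Black"]
# Assumption: 255 = paper white, 0 = full ink; invert to ink fractions.
ink = np.stack([
    1.0 - np.asarray(Image.open(f"out({c}).tif"), dtype=np.float64) / 255.0
    for c in channels
])

# Clamp total ink per pixel to the limit, scaling channels proportionally.
limit = 1.0                                    # 100% total coverage
total = ink.sum(axis=0)
ink *= np.where(total > limit, limit / np.maximum(total, 1e-9), 1.0)

# Joint ordered dither: with a single threshold t per pixel, the pixel gets
# ink i iff cum[i-1] <= t < cum[i]. Because the cumulative sum never
# exceeds 1.0 after clamping, at most one ink fires per pixel.
bayer = np.array([[0, 8, 2, 10], [12, 4, 14, 6],
                  [3, 11, 1, 9], [15, 7, 13, 5]]) / 16.0
h, w = total.shape
t = np.tile(bayer, (h // 4 + 1, w // 4 + 1))[:h, :w]

prev = np.zeros((h, w))
for cum, name in zip(np.cumsum(ink, axis=0), channels):
    dots = (t >= prev) & (t < cum)
    Image.fromarray(np.where(dots, 0, 255).astype(np.uint8)).save(f"limited_{name}.tif")
    prev = cum

The shared threshold is the key difference from dithering each separation independently: clamping the contone values alone would not stop two independently dithered channels from landing on the same pixel.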

Related

Calculate contrast of a color image (RGB)

In a black-and-white image, we can easily calculate the contrast as (total no. of white pixels - total no. of black pixels).
How can I calculate this for a color (RGB) image?
Any ideas will be appreciated.
You may use the standard deviation of the grayscale image as a measure of contrast. This is called "RMS contrast"; see https://en.wikipedia.org/wiki/Contrast_(vision)#RMS_contrast for details.
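As a minimal sketch (Python/OpenCV; the file name is illustrative):

import cv2
import numpy as np

img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float64)
print(np.std(gray))  # RMS contrast: standard deviation of the intensities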
Contrast is defined as the difference between the highest and lowest intensity value of the image. So you can easily calculate it from the respective histogram.
Example: If you have a plain white image, the lowest and highest value are both 255, thus the contrast is 255-255=0. If you have an image with only black (0) and white (255) you have a contrast of 255, the highest possible value.
The same method can be applied to color images if you calculate the luminance of each pixel (and thus convert the image to greyscale). There are several different methods to convert images to greyscale; you can choose whichever you like.
To make the approach more sophisticated, it is advisable to ignore a certain percentage of pixels to account for outliers (otherwise a single white and a single black pixel would lead to "full contrast", regardless of all other pixels). Another approach is to take the number of dark and light pixels into account, as described by @Yves Daoust. That approach has the flaw that one has to set an arbitrary threshold to determine which pixels count as dark/light (usually 127).
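A small sketch of this min-max definition with outlier clipping (Python/OpenCV; the 1% clip fraction is an illustrative choice):

import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2GRAY)
lo, hi = np.percentile(gray, [1, 99])  # ignore the darkest/brightest 1%
contrast = hi - lo                     # 0 (flat image) .. 255 (full range)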
This does not have a single answer. One idea I can think of is to operate on each of the three channels (red, green and blue) separately: compute the histogram of each channel and operate on it.
A simple Google search turns up many relevant algorithms; one that I have used is root mean square (the standard deviation of the pixel intensities).

PNG images look blurry when scaled

I'm a visual/UI designer working on a project/product which has been designed by another designer. This designer provided the front-end dev with good-quality PNG icons, but when the front-end dev sets the images' scale to 0.7, they look blurry.
I've noticed that if we set the image's scale to 0.5, they don't look blurry at all:
0.7: http://i.stack.imgur.com/jQNYG.png
0.5: http://i.stack.imgur.com/hBShu.png
Does anyone know why that happens?
I personally always work with 0.5 scales because I was taught so. Is there any logical reason for this?
Sorry if the answer is obvious. I am really curious about that. Thanks in advance.
What is happening largely depends upon the software that you are using to shrink the image. There is a major difference between reducing by 0.5 and by 0.7.
If you shrink by 0.5, you are combining 4 pixels into one.
If you shrink by 0.7 you are doing fractional sampling. 10 pixels in each direction get reduced to 7.
In 0.5 sampling, you read two pixels across and two pixels down.
In 0.7 sampling you read 1.42857142857143 pixels in each direction. In order to do that you have to weight pixel values, and that is what creates the blurriness in a drawing.
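A quick experiment makes this visible (a sketch using Pillow's plain box filter on a synthetic checkerboard; both choices are just for illustration):

import numpy as np
from PIL import Image

i, j = np.indices((100, 100))
board = ((i // 2 + j // 2) % 2 * 255).astype(np.uint8)  # 2x2 b/w squares
src = Image.fromarray(board)

half = src.resize((50, 50), Image.BOX)  # each output pixel = one 2x2 block
odd = src.resize((70, 70), Image.BOX)   # windows straddle block boundaries

print(np.unique(np.asarray(half)))  # [0 255]: still pure black and white
print(np.unique(np.asarray(odd)))   # many intermediate greys: the blur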
It's because when you halve an image's size (in both dimensions), you are combining exactly 4 pixels into one. However, when you use a slightly "off" scale such as 0.7, you have one and a fraction of a pixel going into each output pixel (in each dimension). This means the data from one source pixel gets spread across up to 4 output pixels instead of one, causing a blurry effect.
Sorry, making an example image would be quite difficult for me, but I hope you get the concept.
I think this has to do with interpolation: when you resize an image there is no way of knowing what is supposed to be in between the two pixels that are essentially being merged. What the computer tries to do is guess what the new pixel is supposed to look like by looking at the pixels around it and combining their values.
So, for example: what is in between white and orange? A less bright orange, so let's make the merged pixel look like that. When you get to a corner there might be more orange, so the new pixel will look more orangey; you get the point.
Now, when you scale by 0.5, the computer merges pixels at a constant rate: if you divide the image in half you always merge exactly 4 pixels into one. If you scale by 0.7, however, you merge an irregular number of pixels, resulting in different concentrations of pixels across the scaled image, which is what looks blurry.
If you don't understand this, I understand; I kinda went off on a tangent. If you need more clarification, comment below :)
Add an .img-crisp class to the image:
.img-crisp {
image-rendering: -moz-crisp-edges; /* Firefox */
image-rendering: -o-crisp-edges; /* Opera */
image-rendering: -webkit-optimize-contrast; /* Webkit (non-standard naming) */
image-rendering: crisp-edges;
-ms-interpolation-mode: nearest-neighbor; /* IE (non-standard property) */
}
The image-rendering CSS property sets an image scaling algorithm. The property applies to an element itself, to any images set in its other properties, and to its descendants.
Source.

Image Equalization to compensate for light sources

Currently I am involved in an image processing project where I am dealing with human faces. But I am facing problems with the images in cases where the light source is on either the left or right side of the face. In those cases, the portion of the image away from the light source is darker. I want to distribute the brightness over the image more evenly, so that the brightness of darker pixels is increased and the brightness of overly bright pixels is decreased at the same time.
I have used gamma-correction techniques to do this, but the results are not desirable. What I actually want is an output whose brightness is independent of the light source; in other words, increasing the brightness of the darker part and decreasing the brightness of the brighter part. I am not sure if I have stated the problem correctly, but this is a very common problem and I haven't found anything useful about it on the web.
1. Image with light source on the right side
2. Image after increasing the brightness of the darker pixels [img = cv2.pow(img, 0.5)]
3. Image after decreasing the brightness of the bright pixels [img = cv2.pow(img, 2.0)]
I was thinking of taking the mean of images 2 and 3, but as we can see, the overly bright pixels still persist in image 3, and I want to get rid of those pixels. Any suggestions?
In the end I need an image with homogeneous brightness, and independent of the light source.
Take a look at homomorphic filtering applied to image enhancement in which you can selectively filter reflectance and illumination components of an image.
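In case it helps, here is a rough homomorphic-filtering sketch (Python/OpenCV; the cutoff and gain values are illustrative, not tuned):

import cv2
import numpy as np

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
log_img = np.log1p(img)  # image = illumination * reflectance; log separates them

# High-pass emphasis in the frequency domain: suppress the slowly varying
# illumination, keep (and slightly boost) the reflectance detail.
spectrum = np.fft.fftshift(np.fft.fft2(log_img))
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
d2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2
sigma = 30.0             # cutoff radius in pixels
low, high = 0.5, 1.5     # gains for illumination / reflectance
h = (high - low) * (1 - np.exp(-d2 / (2 * sigma ** 2))) + low
filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h)).real

out = np.expm1(filtered)
out = cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("evened.png", out)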
I found this paper: http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202010/pdfs/0001374.pdf I think it addresses exactly the concern you have.
You will need to compute the "gradient" of the image, i.e. Laplacian derivatives, for which you can read up here: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.html
I'd be very interested to know about your implementation. If you run into trouble, post a comment here and I can try to help.

Fitting an image to Gaussian distribution

I'm implementing a Gaussian PDF to which I've already fitted (matched) a histogram.
However, when I change the mean and SD values, there should be changes in the output image, yet I don't seem to be getting any.
Can someone please explain, in the context of images, how varying the SD and mean would affect it?
For example, if mean = 30 and SD = 10, would the image be lighter (more bright) compared to mean = 30, SD = 80?
Mean will correspond to the overall average brightness of the image and sd will correspond to the contrast, that is, the difference between the brightest and darkest parts of the image. So if the mean remains the same, as in your example, but sd is increased, then overall average brightness remains the same, but the darkest parts of the image get darker and the brightest parts get brighter, increasing the contrast.
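A tiny illustration of this (Python/OpenCV; note that at large SD values the result clips to [0, 255]):

import cv2
import numpy as np

img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
z = (img - img.mean()) / img.std()  # zero mean, unit SD

flat = np.clip(z * 10 + 30, 0, 255).astype(np.uint8)   # mean 30, SD 10: dark, low contrast
harsh = np.clip(z * 80 + 30, 0, 255).astype(np.uint8)  # mean 30, SD 80: dark, high contrast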

Fast way of getting the dominant color of an image

I have a question about how to get the dominant color of an image (a photo). I thought of this algorithm: loop through all pixels and classify each one's color as red, green, yellow, orange, blue, magenta, cyan, white, grey or black (with some margin of course) together with its darkness (light, dark or normal), and afterwards check which colors occurred the most. I think this is slow and not very precise. Is there a better way?
If it matters, it's a UIImage taken from an iPhone or iPod touch camera which is at most 5 Mpx. The reason it has to be fast is that simply showing a progress indicator doesn't make very much sense as this is for an app for people with bad sight, or no sight at all. Because it's for a mobile device, it may not take very much memory (at most 50 MB).
Your general approach should work, but I'd highlight some details.
Instead of your given list of colors, generate a number of color "bins" in the color spectrum to count pixels. Here's another question that has some algorithms for that: Generating spectrum color palettes. Make the number of bins configurable, so you can experiment to get the results you want.
Next, for each pixel you're considering, you need to find the "nearest" color bin to increment. You'll need to define "nearest"; see this article on "color difference": http://en.wikipedia.org/wiki/Color_difference
For performance, you don't need to look at every pixel. Since image elements usually cover large areas (e.g., the sky, grass, etc.), you can get the result you want by only sampling a few pixels. I'd guess that you could get good results sampling every 10th pixel, or even every 100th. You can experiment with that factor as well.
Averaging pixels can also be done, as in this demo: jsfiddle.net/MUsT8/
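To make the bin-and-sample approach concrete, here is a minimal sketch in Python/numpy (the question targets iOS, but the idea ports directly; the bin count and sampling stride are the knobs mentioned above):

import numpy as np

def dominant_color(rgb, bins_per_channel=4, stride=10):
    # Sample every stride-th pixel and quantize into coarse RGB bins.
    sampled = rgb[::stride, ::stride].reshape(-1, 3).astype(np.int32)
    idx = sampled * bins_per_channel // 256          # bin index per channel
    flat = (idx[:, 0] * bins_per_channel + idx[:, 1]) * bins_per_channel + idx[:, 2]
    counts = np.bincount(flat, minlength=bins_per_channel ** 3)
    best = counts.argmax()
    # Return the center of the fullest bin as the representative color.
    r, rem = divmod(best, bins_per_channel ** 2)
    g, b = divmod(rem, bins_per_channel)
    scale = 256 // bins_per_channel
    return (r * scale + scale // 2, g * scale + scale // 2, b * scale + scale // 2)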
