I'm interested in processing a bitmap in Java using the same (or similar) technique as the Highlight recovery tool in Photoshop. (That would be the Image->Adjustments->Shadow/Highlight tool in CS4.)
I googled around, and found very little outside of discussion about existing tools that do the job.
Any ideas?
Just guessing because I don't have Photoshop - only going by the descriptions I find on the web.
The Radius control is probably used in a Gaussian Blur to get the average value around a pixel, to determine its level of highlight or shadow. Shadows will be closer to 0 while highlights will be closer to 255. The exact definition of "close" will be determined by the Tonal Width control. For example, at 100% maybe the shadows go from 0-63 and the highlights go from 192-255.
The Amount corresponds to the amount of brightness change desired - again I don't know the scale, or what equates to 100%. Changing the brightness of the shadows requires multiplying by a constant value - for example to brighten it by 100% would require multiplying by 2. You want to scale this by the shadow value determined above. The highlights work similarly, except working down from 255 instead of up from 0.
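Going from that description alone, here is a rough Java sketch of how such an adjustment could be put together with java.awt.image.BufferedImage. The limits (0-63 shadows / 192-255 highlights at 100% tonal width) and the interpretation of Amount are only the guesses above, not Photoshop's actual algorithm, and a naive box blur stands in for the Gaussian to keep the sketch short:

```java
import java.awt.image.BufferedImage;

public class ShadowHighlight {

    // radius: neighbourhood size used for the local average (the "Radius" guess above).
    // tonalWidth: 1.0 ~ "100%", i.e. shadows roughly 0-63 and highlights roughly 192-255.
    // amount: 1.0 ~ "100%", i.e. the deepest shadows brightened up to 2x, highlights dimmed symmetrically.
    public static BufferedImage apply(BufferedImage src, int radius,
                                      double tonalWidth, double amount) {
        int w = src.getWidth(), h = src.getHeight();

        // 1. Per-pixel luminance (plain average; a weighted formula would also do).
        double[] lum = new double[w * h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int rgb = src.getRGB(x, y);
                lum[y * w + x] = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3.0;
            }

        // 2. Local average around each pixel; a box blur stands in for the Gaussian here.
        double[] mask = blur(lum, w, h, radius);

        double shadowLimit = 64.0 * tonalWidth;              // guessed mapping of Tonal Width
        double highlightLimit = 255.0 - 64.0 * tonalWidth;

        // 3. Scale each pixel depending on how dark or bright its neighbourhood is.
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double m = mask[y * w + x];
                double gain = 1.0;
                if (m < shadowLimit) {
                    gain = 1.0 + amount * (shadowLimit - m) / shadowLimit;       // brighten shadows
                } else if (m > highlightLimit) {
                    gain = 1.0 / (1.0 + amount * (m - highlightLimit) / (255.0 - highlightLimit)); // dim highlights
                }
                int rgb = src.getRGB(x, y);
                int r = clamp(((rgb >> 16) & 0xFF) * gain);
                int g = clamp(((rgb >> 8) & 0xFF) * gain);
                int b = clamp((rgb & 0xFF) * gain);
                out.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        return out;
    }

    // Naive box blur; fine for a sketch, replace with a separable Gaussian for real use.
    private static double[] blur(double[] src, int w, int h, int radius) {
        double[] dst = new double[src.length];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double sum = 0;
                int n = 0;
                for (int dy = -radius; dy <= radius; dy++)
                    for (int dx = -radius; dx <= radius; dx++) {
                        int xx = x + dx, yy = y + dy;
                        if (xx >= 0 && xx < w && yy >= 0 && yy < h) { sum += src[yy * w + xx]; n++; }
                    }
                dst[y * w + x] = sum / n;
            }
        return dst;
    }

    private static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }
}
```

Treat it as a starting point to tune against Photoshop's output rather than a faithful reimplementation.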
I have DirectWrite set up to render single glyphs and then shape them programmatically based on the glyph run and font metrics. (Due to my use case, I can't store every full string as its own OpenGL texture, otherwise it's essentially a memory leak. So we store each glyph into a texture and lay them out glyph by glyph.)
However, I have two issues:
Inconsistent rendering results.
Scaling metrics leads to inconsistent distances between glyphs.
These results are transferred to a bitmap using Direct2D and a WIC bitmap render target (CreateWicBitmapRenderTarget).
Let's look at an example, font size 12 with Segoe UI.
The 1st line is the full string rendered using DrawTextLayout with D2D1_DRAW_TEXT_OPTIONS_ENABLE_COLOR_FONT. The 2nd line is drawn glyph by glyph using DrawGlyphRun with DWRITE_MEASURING_MODE_NATURAL. The 3rd is rendered with paint.net, just for reference.
This leads to the second issue: the distance between letters can be off. I am not sure if this is a symptom of the previous issue. You can see the distance between S and p is now 2 pixels when drawn separately. Because i is no longer 3 pixels wide, it visually looks too close to c when zoomed out. p and e also look too close.
I have checked the metrics, and I am receiving the right metrics from the font during shaping. The above string's advances from DirectWrite: [1088.0, 1204.0, 1071.0, 946.0, 496.0, 1071.0, 869.0]. I am comparing the output with HarfBuzz: [S=0+1088|p=1+1204|e=2+1071|c=3+946|i=4+496|e=5+1071|s=6+869], which matches.
To convert to DIP I am using this formula for the ratio multiplier: (size * dpi) / 72 / metrics.designUnitsPerEm
So with a default DPI of 96 and default size of 12 we get the following ratio: 0.0078125.
Let's look at S, which is 1088. The advance should be 1088 * 0.0078125 = 8.5. Since we can't draw at half a pixel, which way do we go? I tried every combination of flooring, ceiling, rounding, and converting to int for the lsb, the advance, and the render offset. Whichever way I choose, even if it fixes one situation, I'll test with another font or another size and it will be one or two pixels too close in another string. I just can't seem to find a proper balance that is universal.
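To make the arithmetic concrete, here is a tiny stand-alone Java sketch (not my actual code, which is the Python linked below) comparing the two obvious strategies: rounding every advance before adding it versus accumulating the pen position in floating point and rounding only where each glyph is drawn. The latter is a common approach in glyph-by-glyph layout because the rounding error never accumulates past half a pixel. The values are the "Species" metrics and the 0.0078125 ratio from above:

```java
public class AdvanceRounding {
    public static void main(String[] args) {
        int[] designAdvances = {1088, 1204, 1071, 946, 496, 1071, 869}; // S p e c i e s
        double ratio = (12.0 * 96.0) / 72.0 / 2048.0;                   // = 0.0078125 DIP per design unit

        double penExact = 0.0;   // accumulate in floating point, round only the draw position
        int penRounded = 0;      // round every advance before adding it

        for (int adv : designAdvances) {
            double dip = adv * ratio;
            System.out.printf("advance %.3f DIP -> exact pen %.3f (draw at %d), rounded pen %d%n",
                    dip, penExact, Math.round(penExact), penRounded);
            penExact += dip;                      // error stays bounded to half a pixel
            penRounded += (int) Math.round(dip);  // error accumulates glyph by glyph
        }
    }
}
```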
I am not really sure where to go from here. Any suggestions are appreciated. Here is the code: https://github.com/pyglet/pyglet/blob/master/pyglet/font/directwrite.py#L1736
EDIT: After a suggestion of using DrawGlyphRun with the full run, it does appear exactly like what DrawTextLayout outputs. So DrawGlyphRun should produce the same appearance. Here's where it gets interesting:
Line1: DrawTextLayout
Line2: Single glyphs drawn by DrawGlyphRun
Line3: All glyphs drawn using DrawGlyphRun
You can see something interesting. If I render each 'c' by itself (right side), it has 4 pixels on the left of the character. But in the strings that column looks like it's missing. Actually, taking a deeper look with a color dropper, the color is indeed there, but it's much darker. So somehow each glyph is affecting the blend of the pixels around it. I am not really sure how it's doing this.
EDIT2: Talking it over with someone else, I think we narrowed this down to anti-aliasing. Applying the anti-aliasing to the whole string vs each character produces a different result. With D2D1_TEXT_ANTIALIAS_MODE_ALIASED set, each character now looks exactly the same in both cases.
I am color blind. This is usually not a problem for me, but whenever a game or application uses red and green as contrast colors it can become an annoyance. Looking for accessibility tools online, all I've managed to find are tools that adjust colors on snapshots, or on a camera input. For this reason, I've been toying with the idea of writing my own color adjustment tool.
Suppose I would want to write my own shader or post effect that shifts or swaps color values, then apply it to everything I see in Windows (10) in real-time - is there a way to do this? For example, is it possible to insert a pixel shader into the Windows rendering pipeline? Or would it be possible to write an application that takes the entire screen as input and outputs something else, faster than say 5ms per frame using a modern GPU? How would you approach this?
Developing with Matlab 2014b's GUIDE, some of my GUIs have elements with units specified as "characters". Depending on the screen magnification level in Windows 7 (Control Panel > Appearance > Display), the GUI will look very different, with elements scattered. Shouldn't using characters as the unit type make adapting to screen magnification a piece of cake, since I believe the system character size changes with it?
I'd rather not hard-code the units as pixels etc., so that the GUI is happy being used on Windows/Linux/Mac. Does anyone have any experience or suggestions with this?
I have found it is easiest to use pixels. You can then get the current window size and set things as percentages (from variables) of the real pixel dimensions. This is nice when you want to make sure there is a minimum or maximum panel or item size that can be resized or scaled within a range.
If you put this code in the resizeFcn() it should be good.
I want a formula to detect/calculate the change in visible luminosity in a part of an image, provided I can calculate the RGB, HSV, HSL and CMYK color spaces.
E.g.: in the above picture you will notice that the left side of the image is brighter than the right side, which is beneath a shade.
I have had a little think about this, and done some experiments in Photoshop, though you could just as well use ImageMagick which is free. Here is what I came up with.
Step 1 - Convert to Lab mode and discard the a and b channels since the Lightness channel holds most of the brightness information which, ultimately, is what we are looking for.
Step 2 - Stretch the contrast of the remaining L channel (using Levels) to accentuate the variation.
Step 3 - Perform a Gaussian blur on the image to remove local, high frequency variations in the image. I think I used 10-15 pixels radius.
Step 4 - Turn on the Histogram window and take a single row marquee and watch the histogram change as different rows are selected.
Step 5 - Look out for a strongly bimodal histogram (two distinct peaks) to identify the illumination variations.
This is not a complete, general purpose solution, but it may hold some pointers and cause people who know better to suggest improvements for you! Note that the method requires the image to have some areas of high uniformity, like the whitish horizontal bar across your input image. However, nearly any algorithm is going to have a hard time telling the difference between a sheet of white paper with a shadow of uneven light across it and the same sheet of paper with a grey sheet of paper laid on top of it...
In the images below, I have superimposed the histogram top right. In the first one, you can see the histogram is not narrow and bimodal because the dotted horizontal selection marquee is across the bar-code area of the image.
In the subsequent images, you can see a strong bimodal histogram because the dotted selection marquee is across a uniform area of image.
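For anyone wanting to script this instead of eyeballing the Photoshop histogram, here is a small Java sketch of the row-by-row idea: build a coarse per-row histogram of a luminance channel and flag rows whose histogram looks strongly bimodal. It skips the contrast stretch and Gaussian blur steps for brevity, uses a plain RGB average instead of the Lab L channel, and the "two well-separated peaks" test is an arbitrary starting point, not anything standard:

```java
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;

public class RowBimodality {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File(args[0]));
        int w = img.getWidth(), h = img.getHeight();

        for (int y = 0; y < h; y++) {
            int[] hist = new int[32];                       // coarse 32-bin histogram per row
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                int lum = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
                hist[lum / 8]++;                            // stand-in for the Lab L channel
            }
            if (looksBimodal(hist)) {
                System.out.println("Row " + y + " shows two distinct brightness populations");
            }
        }
    }

    // Crude test: the two largest bins are far apart and each holds a sizeable share of the row.
    private static boolean looksBimodal(int[] hist) {
        int first = 0;
        for (int i = 1; i < hist.length; i++) if (hist[i] > hist[first]) first = i;
        int second = (first == 0) ? 1 : 0;
        for (int i = 0; i < hist.length; i++)
            if (i != first && hist[i] > hist[second]) second = i;
        int total = 0;
        for (int v : hist) total += v;
        return Math.abs(first - second) >= 6 && hist[first] > total / 8 && hist[second] > total / 8;
    }
}
```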
The first problem is in "visible luminosity". It may mean one of several things. This discussion should be a good start. (Yes, it has incomplete and contradictory answers, as well.)
Formula to determine brightness of RGB color
You should make sure you operate on a linear image which does not have any gamma correction applied to it. AFAIK Photoshop does not degamma and regamma images during filtering, which may produce erroneous results. It all depends on how accurate you want the results to be. Photoshop wants things to look good, not be precise.
In principle you should first pick a formula to convert your RGB values to some luminosity value which fits your use. Then you have a single-channel image which you'll need to filter with a Gaussian filter, sliding average, or some other suitable filter. Unfortunately, this may require special tools as photoshop/gimp/etc. type programs tend to cut corners.
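If you need a concrete starting point, one common definition is the Rec. 709 relative luminance computed on linearized (de-gamma'd) sRGB values, along the lines of the formulas discussed in the linked question. A minimal Java helper, assuming 8-bit packed sRGB input:

```java
public final class Luminance {

    // Rec. 709 relative luminance on linear RGB, returned in the range 0..1.
    public static double relativeLuminance(int rgb) {
        double r = srgbToLinear(((rgb >> 16) & 0xFF) / 255.0);
        double g = srgbToLinear(((rgb >> 8) & 0xFF) / 255.0);
        double b = srgbToLinear((rgb & 0xFF) / 255.0);
        return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    // Standard sRGB decoding: undo the gamma encoding before mixing channels.
    private static double srgbToLinear(double c) {
        return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }
}
```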
But then there is one thing you would probably like to consider. If you have an even brightness gradient across an image, the eye is happy and does not perceive it. Rather large differences go unnoticed if the contrast in the image is constant across the image. Unfortunately, the definition of contrast is not very meaningful if you do not know at least something about the content of the image. (If you have scanned/photographed documents, then the contrast is clearly between ink and paper.) In your sample image the brightness changes quite abruptly, which makes the change visible.
Just to show you how strange the human vision is in determining "brightness", see the classical checker shadow illusion:
http://en.wikipedia.org/wiki/Checker_shadow_illusion
So, my impression is that talking about the conversion formulae is probably the second or third step in the process of finding suitable image processing methods. The first step would be to try to define the problem in more detail. What do you want to accomplish?
I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit, and white (empty space, not visible) represents a 1 bit.
Does anybody have any idea how to process the image in order to extract this information? My project is written in Java, but any solution is accepted.
Thanks all for the support.
I'm not an expert in image processing, so I tried to apply edge detection using a Canny edge detector implementation (a free Java implementation can be found here). I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png], reduced it (scale factor = 0.4) for faster processing, and this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit value, and no rectangle as a 1?
The image has 10 lines x 16 columns. I don't use Python, but I can try to convert it to Java.
Many thanks for the support.
This is good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create a horizontal and a vertical profile of the image, i.e. sum up the values in all columns and in all rows and store them in arrays. For an image of MxN (width x height) you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells will define the locations of interest (where you expect the marks).
Based on this, you can define the lozenges in the form (the rectangles where you expect marks) and get their coordinates; then you just add up the pixel values in each lozenge and, based on that number, decide whether it contains a mark or not. A sketch of this simple case follows below.
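A minimal Java sketch of the simple case, assuming a reasonably clean scan; the half-of-maximum cut-off is an arbitrary starting value, not a tuned one:

```java
import java.awt.image.BufferedImage;

public class OmrProfiles {

    // Returns {columnsOfInterest, rowsOfInterest}: true where the summed darkness of a
    // column/row crosses the threshold, i.e. where it passes through at least one mark.
    public static boolean[][] profiles(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        double[] colProfile = new double[w];   // M cells, one per column
        double[] rowProfile = new double[h];   // N cells, one per row

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = img.getRGB(x, y);
                int lum = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
                double darkness = (255 - lum) / 255.0;   // black pixels contribute 1, white 0
                colProfile[x] += darkness;
                rowProfile[y] += darkness;
            }
        }
        // Runs of consecutive "true" cells in the two arrays give the lozenge coordinates.
        return new boolean[][] { threshold(colProfile), threshold(rowProfile) };
    }

    // Mark a cell as "of interest" when it exceeds half of the profile's maximum
    // (an arbitrary cut-off; real forms would need a tuned or adaptive value).
    private static boolean[] threshold(double[] profile) {
        double max = 0;
        for (double v : profile) max = Math.max(max, v);
        boolean[] result = new boolean[profile.length];
        for (int i = 0; i < profile.length; i++) result[i] = profile[i] > max * 0.5;
        return result;
    }
}
```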
Case 2: Skew (slant in the image)
Use a Fourier transform (FFT) to find the slant angle and then transform the image to correct it.
Case 3: Stretch or shrink
Pretty much the same as case 1, but the noise is higher and the reliability lower.
Aliostad has made some good comments.
This is OMR and you will find it much easier to get good consistent results with a good image processing library. www.leptonica.com is a free open source 'C' library that would be a very good place to start. It could process the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference / registration marks to allow for skew and stretch especially if you are using document scanning or capturing from a camera image.
I am not familiar with Java, but in Python you can use the imaging library (PIL) to open the image. Then read the height and width, and segment the image into a grid accordingly, with cells of Height/Rows by Width/Cols. Then just look for black pixels in those regions, or whatever color PIL registers that black to be. This obviously relies on the grid-like nature of the data.
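Since the question mentions converting to Java anyway, here is roughly what that grid approach could look like with java.awt.image instead of PIL. The 10x16 grid comes from the question; the darkness cut-off (128) and the 20% fill ratio are guesses that would need tuning, and the image is assumed to be cropped to just the code area:

```java
import java.awt.image.BufferedImage;

public class GridDecoder {

    public static int[][] decode(BufferedImage img, int rows, int cols) {
        int cellW = img.getWidth() / cols;
        int cellH = img.getHeight() / rows;
        int[][] bits = new int[rows][cols];

        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                int dark = 0;
                for (int y = r * cellH; y < (r + 1) * cellH; y++) {
                    for (int x = c * cellW; x < (c + 1) * cellW; x++) {
                        int rgb = img.getRGB(x, y);
                        int lum = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
                        if (lum < 128) dark++;               // guessed "dark pixel" cut-off
                    }
                }
                // Black rectangle present -> 0 bit, empty cell -> 1 bit (as defined in the question).
                bits[r][c] = (dark > cellW * cellH * 0.2) ? 0 : 1;
            }
        }
        return bits;
    }
}
```

Calling decode(img, 10, 16) on the cropped code region should then yield the bit matrix directly.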
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, like one of those described on Wikipedia. I have used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 into black (if you want to catch darker boxes, just increase this value) and make everything else completely white. Then create bounding boxes along the lines where the pixels are all white. If the data isn't terribly skewed, this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
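The 180 cut-off step translates directly to Java; a small sketch of just that binarization pass (the edge detection and bounding boxes are left out):

```java
import java.awt.image.BufferedImage;

public class Binarize {

    // Everything darker than the cut-off (180 above) becomes pure black, the rest pure white,
    // which makes the later bounding-box and pixel-counting steps straightforward.
    public static BufferedImage binarize(BufferedImage src, int cutoff) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                                              BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int gray = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
                out.setRGB(x, y, gray < cutoff ? 0x000000 : 0xFFFFFF);
            }
        }
        return out;
    }
}
```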
Edit2:
Denis, how large is your dataset and how large are the images? If you have thousands of these images, then it is not feasible to manually remove the borders (the red background and yellow bars). I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise:
The previous segmentation method may still be applied if you preprocess the image by binarizing it in this manner; in that case you need only count the number of black or white pixels in each cell and pick a threshold after some training samples.