Convolution effect changed from Gimp version 2.8.22 to 2.10.18

I recently had the task of applying several convolution filters at university. While playing around with Gimp version 2.10.18, I noticed that the filters from the exercises did not produce the expected outcome.
I found out that convolution behavior changed from Gimp 2.8.22 to 2.10.18 and wanted to ask if someone knows how to get the old behavior back.
Let me explain what should happen and what actually happens in 2.10.18:
My sample picture looks like this (these are the values in all its pixel rows):
90 90 150 210 210
I now apply the filter
0 0 0
0 2 0
0 0 0
with divisor 1 and offset 0.
The maths behind it and Gimp 2.8 tell me that the outcome should be composed of
180 values on the left side (2 × 90 = 180) and 255 on the right side (2 × 150 = 300 and 2 × 210 = 420, both clamped to 255)
I don't understand what Gimp 2.10 does, but the outcome just has brighter values (90->125, 150->205, 210->255) instead of the expected change.
Is this a bug or am I somehow missing something? Thanks!

A big difference between 2.10 (high bit depth) and previous versions (8-bit) is that 2.10 works in "linear light". In 2.8, the 0..255 pixel values are not a linear representation of the color/luminosity; they are gamma-corrected (so that there are more values for dark tones(*)). Most Gimp 2.8 tools work (incorrectly) directly on these gamma-corrected values. In Gimp 2.10, if you are in 8-bit (and, more generally, whenever you use a gamma-corrected representation, though this matters mostly in 8-bit), the pixel data is converted to 32-bit floating-point linear, removing the gamma compensation; then the required transformation is applied; then the data is converted back to 8-bit, with the gamma compensation reinstated.
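To see that this alone explains the numbers in the question, here is a quick numeric check, a sketch using the standard sRGB transfer functions (not GIMP's actual code):

```python
def srgb_to_linear(v):
    """Decode an 8-bit sRGB value to linear light in 0..1."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(l):
    """Encode linear light in 0..1 back to an 8-bit sRGB value."""
    l = min(max(l, 0.0), 1.0)  # clamp, as any converter must when going back to 8-bit
    c = 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055
    return round(c * 255.0)

for v in (90, 150, 210):
    doubled = 2.0 * srgb_to_linear(v)   # the "x2" convolution kernel, in linear light
    print(v, "->", linear_to_srgb(doubled))
# 90 -> 125, 150 -> 205, 210 -> 255
```

The output reproduces exactly the 90→125, 150→205, 210→255 values observed in 2.10.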
June 2021 Edit: in 2.10, if you put the image in a high-precision mode, and use the values that are the mathematical equivalents of 90/255, 150/255 and 210/255:
... you get a result that is equivalent to 180/255:
Which confirms that in 2.10 convolution operates on "linear light".
So
If you want the old behavior, use the old Gimp. But you have to keep in mind that the old behavior was incorrect, even if some workflows could take advantage of it.
If you wanted to see what a spatial convolution matrix can do, then use Gimp 2.10 in "linear light".
(*) Try this: open two images in Gimp, fill one with a checkerboard pattern and one with grey (128,128,128). Step back until the checkerboard becomes a uniform gray. You'll notice that the plain gray image is darker... so (128,128,128) is not the middle of the luminosity range.
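For a back-of-the-envelope confirmation (again assuming the standard sRGB decoding curve), you can compute how much linear light (128,128,128) actually encodes:

```python
# How much linear light does sRGB (128,128,128) actually represent?
c = 128 / 255.0
linear = ((c + 0.055) / 1.055) ** 2.4
print(f"{linear:.0%}")   # ~22% of maximum, not 50%
```

The checkerboard, on the other hand, averages out to 50% linear light, which is why it looks brighter.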

Related

Slight differences in pixel values between OpenCV and Matlab

I am trying to port some old Matlab code to Python. I chose OpenCV as I am familiar with the library. Despite that, I found the results differ a bit (this program seems very sensitive to small changes in texture), and I found pixel values are slightly different even when just reading the image from disk (I thought it could be some antialiasing or odd behavior when rescaling, but it's there even before modifying anything).
I am aware of the different color order (RGB in Matlab by default, BGR in OpenCV), but still pixel values are sometimes off by ±2 units (on 8-bit-per-channel images). See for example in the following screencap: the second pixel is 5-14-9 (RGB), whereas in Matlab it's 5-14-11. The first pixel has exactly the same value.
I can't think of any way to check the EXACT transformation/rounding that Matlab is performing, or why this differs in the first place. Any ideas on this matter?
Are you sure that you are looking at the correct pixel?
Matlab and Python differ in indexing: in Matlab the first index is 1, while in Python the first index is 0.
My guess is that you should be comparing the Matlab pixel [2,1] with the Python pixel value at index 0, which is 5-14-11 like the one in Matlab.
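A minimal way to check both suspects (channel order and indexing) at once, sketched with OpenCV; the file name is a placeholder:

```python
import cv2

img = cv2.imread("photo.png")   # placeholder file; OpenCV loads channels as B, G, R
rgb = img[..., ::-1]            # reverse the channel axis to get R, G, B

print(rgb[0, 0])   # Python index [0, 0] corresponds to Matlab's im(1, 1, :)
print(rgb[0, 1])   # ... and [0, 1] to Matlab's im(1, 2, :)
```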

Algorithm to detect the change in visible luminosity in an image

I want a formula to detect/calculate the change in visible luminosity in a part of the image, provided I can calculate the RGB, HSV, HSL and CMYK color spaces.
E.g.: in the above picture we will notice that the left side of the image is brighter than the right side, which is beneath a shade.
I have had a little think about this, and done some experiments in Photoshop, though you could just as well use ImageMagick which is free. Here is what I came up with.
Step 1 - Convert to Lab mode and discard the a and b channels since the Lightness channel holds most of the brightness information which, ultimately, is what we are looking for.
Step 2 - Stretch the contrast of the remaining L channel (using Levels) to accentuate the variation.
Step 3 - Perform a Gaussian blur on the image to remove local, high frequency variations in the image. I think I used 10-15 pixels radius.
Step 4 - Turn on the Histogram window and take a single row marquee and watch the histogram change as different rows are selected.
Step 5 - Look out for a strongly bimodal histogram (two distinct peaks) to identify the illumination variations.
This is not a complete, general-purpose solution, but may hold some pointers and cause people who know better to suggest improvements for you!!! Note that the method requires the image to have some areas of high uniformity, like the whitish horizontal bar across your input image. However, nearly any algorithm is going to have a hard time telling the difference between a sheet of white paper with a shadow of uneven light across it and the same sheet of paper with a grey sheet of paper laid on top of it...
In the images below, I have superimposed the histogram top right. In the first one, you can see the histogram is not narrow and bimodal because the dotted horizontal selection marquee is across the bar-code area of the image.
In the subsequent images, you can see a strong bimodal histogram because the dotted selection marquee is across a uniform area of image.
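For anyone without Photoshop, here is a rough translation of steps 1-5 into OpenCV (free, like ImageMagick). The file name, blur sigma, bin count and peak-gap threshold are all guesses you would need to tune:

```python
import cv2
import numpy as np

img = cv2.imread("scan.png")                          # placeholder input image
L = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)[:, :, 0]     # step 1: keep only Lightness
L = cv2.normalize(L, None, 0, 255, cv2.NORM_MINMAX)   # step 2: stretch the contrast
L = cv2.GaussianBlur(L, (0, 0), 12)                   # step 3: remove local variation

# steps 4-5: per-row histogram; rows crossing an illumination change are bimodal
for y in range(0, L.shape[0], 20):
    hist = np.histogram(L[y], bins=32, range=(0, 255))[0]
    peaks = np.flatnonzero((hist > np.roll(hist, 1)) &
                           (hist >= np.roll(hist, -1)) &
                           (hist > 0.3 * hist.max()))
    if len(peaks) >= 2 and peaks.max() - peaks.min() > 8:
        print(f"row {y}: strongly bimodal -> uneven illumination suspected")
```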
The first problem is with "visible luminosity". It may mean one of several things. This discussion should be a good start. (Yes, it has incomplete and contradictory answers, as well.)
Formula to determine brightness of RGB color
You should make sure you operate on a linear image which does not have any gamma correction applied to it. AFAIK Photoshop does not degamma and regamma images during filtering, which may produce erroneous results. It all depends on how accurate you need the results to be. Photoshop wants things to look good, not be precise.
In principle you should first pick a formula to convert your RGB values to some luminosity value which fits your use. Then you have a single-channel image which you'll need to filter with a Gaussian filter, a sliding average, or some other suitable filter. Unfortunately, this may require special tools, as Photoshop/GIMP-type programs tend to cut corners.
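As a sketch of that recipe (assuming sRGB input and Rec. 709 luminance weights; the file name and blur sigma are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("photo.png").astype(np.float32) / 255.0   # placeholder file
linear = np.where(img <= 0.04045, img / 12.92,
                  ((img + 0.055) / 1.055) ** 2.4)          # undo the sRGB gamma
b, g, r = cv2.split(linear)
luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b           # Rec. 709 weights
trend = cv2.GaussianBlur(luminance, (0, 0), 15)            # low-pass: the lighting trend
```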
But then there is one thing you would probably like to consider. If you have an even brightness gradient across an image, the eye is happy and does not perceive it. Rather large differences go unnoticed if the contrast in the image is constant across the image. Unfortunately, the definition of contrast is not very meaningful if you do not know at least something about the content of the image. (If you have scanned/photographed documents, then the contrast is clearly between ink and paper.) In your sample image the brightness changes quite abruptly, which makes the change visible.
Just to show you how strange the human vision is in determining "brightness", see the classical checker shadow illusion:
http://en.wikipedia.org/wiki/Checker_shadow_illusion
So, my impression is that talking about the conversion formulae is probably the second or third step in the process of finding suitable image processing methods. The first step would be to try to define the problem in more detail. What do you want to accomplish?

How to reconstruct Bayer to RGB from Canon RAW data?

I'm trying to reconstruct RGB from RAW Bayer data from a Canon DSLR but am having no luck. I've taken a peek at the dcraw.c source, but its lack of comments makes it a bit tough to get through. Anyway, I have debayering working but I need to then take this debayered data and get something that looks correct. My current code does something like this, in order:
Demosaic/debayer
Apply white balance multipliers (I'm using the following ones: 1.0, 2.045, 1.350. These work perfectly in Adobe Camera Raw as 5500K, 0 Tint.)
Multiply the result by the inverse of the camera's color matrix
Multiply the result by an XYZ to sRGB matrix from Bruce Lindbloom's site (the D50 sRGB one)
Set white/black point, I am using an input levels control for this
Adjust gamma
Some of what I've read says to apply the white balance and black point correction before the debayer. I've tried, but it's still broken.
Do these steps look correct? I'm trying to determine if the problem is 1.) my sequence of operations, or 2.) the actual math being used.
The first step should be setting the black and saturation points, because white balancing has to take saturated pixels into account in order to avoid magenta highlights:
Then, before demosaicing, apply white balancing. See here (http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm) how demosaicing without white balancing first introduces artifacts.
After the first step (debayer) you should have a proper RGB image with the right colors. The remaining steps are just cosmetics, so I'm guessing there's something wrong at step one.
One problem could be that the Bayer pattern you're using to generate the RGB image is different from the CFA pattern of the camera. Match the sensor alignment in your code to that of the camera!
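Putting the corrected order together, a rough NumPy sketch. All numbers (black level, gains, color matrix) are placeholders, and the demosaic is the cheapest possible half-size one, not what dcraw actually does:

```python
import numpy as np

def half_size_demosaic_rggb(bayer):
    """Cheapest possible demosaic: collapse each 2x2 RGGB cell into one RGB pixel."""
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    return np.dstack([r, g, b])

bayer = np.random.rand(512, 512).astype(np.float32)  # stand-in for real sensor data

# 1. black and saturation points first (placeholder levels)
black, sat = 0.05, 0.95
bayer = np.clip((bayer - black) / (sat - black), 0.0, 1.0)

# 2. white balance on the Bayer data, before demosaicing (placeholder gains)
bayer[0::2, 0::2] *= 2.045   # R sites
bayer[1::2, 1::2] *= 1.350   # B sites

# 3. demosaic with the CFA order that matches the camera (here assumed RGGB)
rgb = half_size_demosaic_rggb(np.clip(bayer, 0.0, 1.0))

# 4. camera RGB -> sRGB via one combined matrix (identity placeholder)
cam_to_srgb = np.eye(3, dtype=np.float32)
rgb = rgb @ cam_to_srgb.T

# 5. gamma-encode last
rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / 2.2)
```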

Simulating the highlight recovery tool from Photoshop

I'm interested in processing a bitmap in Java using the same (or similar) technique as the Highlight recovery tool in Photoshop. (That would be the Image->Adjustments->Shadow/Highlight tool in CS4.)
I googled around, and found very little outside of discussion about existing tools that do the job.
Any ideas?
Just guessing because I don't have Photoshop - only going by the descriptions I find on the web.
The Radius control is probably used in a Gaussian Blur to get the average value around a pixel, to determine its level of highlight or shadow. Shadows will be closer to 0 while highlights will be closer to 255. The exact definition of "close" will be determined by the Tonal Width control. For example, at 100% maybe the shadows go from 0-63 and the highlights go from 192-255.
The Amount corresponds to the amount of brightness change desired - again I don't know the scale, or what equates to 100%. Changing the brightness of the shadows requires multiplying by a constant value - for example to brighten it by 100% would require multiplying by 2. You want to scale this by the shadow value determined above. The highlights work similarly, except working down from 255 instead of up from 0.
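Here is that guess written out in Python, with all the scales made up (they would need tuning against the real tool to match Photoshop's behavior; only the shadow side is shown):

```python
import cv2
import numpy as np

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder

radius = 30                                    # "Radius": neighbourhood of each pixel
local = cv2.GaussianBlur(img, (0, 0), radius)  # average brightness around each pixel

tonal_width = 0.5                              # "Tonal Width" at 50%: shadows = 0..127
shadow = np.clip(1.0 - local / (255.0 * tonal_width), 0.0, 1.0)

amount = 0.5                                   # "Amount": up to 50% brightening
out = np.clip(img * (1.0 + amount * shadow), 0, 255).astype(np.uint8)
```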

How do I locate black rectangles in a grid and extract the binary code from that

I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit and white (empty space, not visible) a 1 bit.
Does anybody have an idea how to process the image in order to extract this information? My project is written in Java, but any solution is accepted.
Thanks all for the support.
I'm not an expert in image processing; I tried to apply edge detection using a Canny edge detector implementation, a free Java implementation of which can be found here. I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png], reduced it (scale factor = 0.4) for faster processing, and this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit value, and no rectangle as a 1?
The image has 10 rows × 16 columns. I don't use Python, but I can try to convert it to Java.
Many thanks for the support.
This is good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create horizontal and vertical profiles of the image, i.e. sum up the values in all columns and in all rows and store them in arrays. For an image of M×N (width × height) you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells define the locations of interest (where you expect the marks).
Based on this, you can define lozenges in the form: you get the coordinates of the lozenges (rectangles where you have marks), then add up the pixel values in each lozenge and, based on that number, decide whether it has a mark or not. (A code sketch of this simple case follows Case 3 below.)
Case 2: Skew (slant in the image)
Use a Fourier transform (FFT) to find the slant angle and then rotate the image to correct it.
Case 3: Stretch or shrink
Pretty much the same as the simple case, but the noise is higher and the reliability lower.
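Here is a NumPy sketch of the simple (no skew) case promised above; the mean-based thresholds are naive and would need calibration on real scans:

```python
import numpy as np

def bands(profile, thresh):
    """Split a 1-D profile into (start, stop) index runs above thresh."""
    above = np.concatenate([[False], profile > thresh, [False]])
    edges = np.flatnonzero(np.diff(above.astype(np.int8)))
    return list(zip(edges[0::2], edges[1::2]))

img = np.random.randint(0, 256, (200, 320)).astype(np.uint8)  # stand-in grayscale image
ink = 255.0 - img               # invert so dark marks contribute large values

rows = bands(ink.sum(axis=1), ink.sum(axis=1).mean())  # vertical profile -> row bands
cols = bands(ink.sum(axis=0), ink.sum(axis=0).mean())  # horizontal profile -> col bands

for r0, r1 in rows:             # each row-band x column-band crossing is a lozenge
    bits = [int(ink[r0:r1, c0:c1].mean() > 128) for c0, c1 in cols]
    print(bits)                 # 1 where the lozenge contains a dark mark
```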
Aliostad has made some good comments.
This is OMR and you will find it much easier to get good consistent results with a good image processing library. www.leptonica.com is a free open source 'C' library that would be a very good place to start. It could process the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference / registration marks to allow for skew and stretch especially if you are using document scanning or capturing from a camera image.
I am not familiar with Java, but in Python you can use the imaging library (PIL) to open the image. Then get the height and the width, and segment the image into a grid accordingly, by height/rows and width/cols. Then just look for black pixels in those regions, or whatever color PIL registers that black to be. This obviously relies on the grid-like nature of the data. See the sketch below.
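A minimal version of this idea with Pillow, using the 10×16 grid mentioned above (the file name and the darkness cutoff of 128 are placeholders):

```python
from PIL import Image

ROWS, COLS = 10, 16                           # grid size given above
img = Image.open("code.png").convert("L")     # placeholder file, converted to grayscale
w, h = img.size
cw, ch = w // COLS, h // ROWS

bits = []
for r in range(ROWS):
    for c in range(COLS):
        cell = img.crop((c * cw, r * ch, (c + 1) * cw, (r + 1) * ch))
        mean = sum(cell.getdata()) / (cw * ch)
        bits.append(0 if mean < 128 else 1)   # dark cell -> 0 bit, light -> 1 bit
print(bits)
```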
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, like something from Wikipedia. I have used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 into black (if you want the boxes darker, just increase this value) and make everything else completely white. Then create bounding boxes: lines where the pixels are all white. If the data isn't terribly skewed, then this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
Edit2:
Denis, how large is your dataset and how large are the images? If you have thousands of these images, then it is not feasible to manually remove the borders (the red background and yellow bars). I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise:
The previous segmentation method may still be applied if you preprocess (binarize) the image in this manner; then you need only count the number of black or white pixels in each cell and pick a threshold from some training samples.
