With ImageMagick, why do I get a halo when flattening a PSD with alpha?

I have a large number of PSD files that contain semi-transparent layers. These layers are not flattened correctly regardless of which flags I use with convert or mogrify.
The simplest form looks as follows:
convert -background transparent source.psd -flatten output.png
Here is what the source image looks like in Photoshop. Note that this is a drop shadow layer and not a layer effect:
Here is how it comes out:
This may not be obvious against the Photoshop background, so here it is laid over a grey background:
Source:
Output:
EDIT:
I dug a bit into what is happening in the numbers. For the initial source image, the shadow is completely black and the alpha fades in. For the output image, the alpha is not as high, but it compensates by inaccurately lightening the image in a somewhat bumpy fashion. It's almost as if it's pre-multiplied, but it's taking the background as white?
Here is a straight RGB render without the alpha multiplied in:
Source:
Output:
In other words, the RGB values are not being preserved at all. The alpha is dimmed, but not distorted the way these values are. My guess would be some sort of rounding error from trying to extrapolate the color from the alpha, as though it is trying to "un-premultiply" the values. Any help is appreciated.

The short answer is that it is fixed in V7 of the software (I think). I run macOS, and the installer for V7 doesn't work well at all and appears unstable. After running it on an Ubuntu VM, it works well. I have also confirmed with another user that, on Windows, V6 has this problem and V7 does not.

Related

How to split a transparent PNG into 2 separated images with imagemagick

Recently, I read about an interesting technique for optimizing transparent PNG images.
The idea was to split a transparent PNG into 2 parts: an 8-bit PNG with the color information and a 24-bit PNG with the transparency, and to merge them on the client side. This drastically reduces the size of the image. The example in the article was made with Photoshop, but I'm pretty sure we could do it automatically with ImageMagick.
So, the question is: how do you split a PNG image with ImageMagick in such a way?
The article talks about "dirty transparency" which means the colour values of transparent pixels, although not visible, are retained in the image - they are just made invisible by the alpha layer.
These values, because they continue to contain the colour information, prevent the PNG optimiser from encoding them efficiently. You can achieve what the article suggests in this respect within ImageMagick by using:
convert image.png ... -alpha background result.png
That will make all transparent pixels the same colour (your background colour), and the PNG encoder will then be able to optimise them more readily, since the values repeat over and over again.
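For illustration, here is a small NumPy sketch of the same "dirty transparency" clean-up. This only models the idea, not ImageMagick's internals, and the function name is made up:

```python
import numpy as np

def clean_dirty_transparency(rgba, background=(0, 0, 0)):
    """Give every fully transparent pixel the same hidden colour,
    similar in spirit to ImageMagick's -alpha background."""
    out = rgba.copy()
    out[out[..., 3] == 0, :3] = background
    return out

# One visible red pixel, one invisible pixel carrying "dirty" colour data.
img = np.array([[[255, 0, 0, 255],
                 [137, 42, 99, 0]]], dtype=np.uint8)
cleaned = clean_dirty_transparency(img)
```

After this, every invisible pixel carries identical bytes, which a PNG optimiser can encode far more efficiently.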
See last part of this answer.

Algorithm behind PhotoPaint's "Subtract" overlay mode?

In Corel PhotoPaint, when you overlay two images using the "Subtract" mode instead of "Normal", you get more saturated, "neater" colors in the darker areas from the top image. Does anyone know what the algorithm behind this overlay method is? I'm looking into emulating it in Objective-C as well as PHP.
For comparison, I created an overlay image of a blurred black center circle which, in the top half, uses the Normal overlay mode and, in the bottom half, uses the Subtract mode. Normal mode makes the resulting darker area look much more gray.
Normal
Subtract
Exporting this CPT file to PSD and opening in Photoshop, the Subtract mode is not available and is lost, so I'm not even sure what it's called in Photoshop.
Thanks for any help! (Original photo CC-licensed by iPyo.)
When combining two images you will have varying options to do so. The general algorithm for such a combination is
for each pixel in resultImage
    resultImage[pixel] = sourceA[pixel] OP sourceB[pixel]
Well, and then you choose OP. In your question's case that's '-' (subtraction).
But it could also be +, *, /, MOD, DIV, etc.
Usually you will also want to perform some kind of range check so the pixel intensities of your result image don't overflow or underflow. But then again, you might want to do such a thing intentionally.
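As a concrete sketch, here is the per-pixel loop above in Python/NumPy with '-' as OP and the range check applied. This assumes 8-bit channels and bottom-minus-top order; whether PhotoPaint does anything extra is not confirmed here:

```python
import numpy as np

def blend_subtract(bottom, top):
    """'Subtract' overlay: bottom - top, clamped to the [0, 255] range."""
    # Widen to a signed type first so the subtraction can go negative.
    result = bottom.astype(np.int16) - top.astype(np.int16)
    return np.clip(result, 0, 255).astype(np.uint8)

bottom = np.array([[200, 100, 30]], dtype=np.uint8)
top = np.array([[50, 150, 10]], dtype=np.uint8)
print(blend_subtract(bottom, top))  # 100 - 150 clamps to 0 instead of wrapping
```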

How do I know if an image is "Premultiplied Alpha"?

I heard that only premultiplied alpha is needed when doing layer blending etc. How do I know if my original image is premultiplied alpha?
You can't.
The only thing you can check is whether it's not premultiplied. To do that, go over all the pixels and see if any colour value is higher than the alpha would permit: if (max(col.r, col.g, col.b) > 255 * alpha) // not premultiplied. Any other case is ambiguous and may or may not be premultiplied. Your best guess is probably to assume images aren't, as that's the case for most PNGs.
Edit: actually, not even the check I posted would always work, as there are a lot of PNGs out there with a white matte, so the image would have to include parts with an alpha of 0 to determine the matte color first.
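A sketch of that check in Python/NumPy (the function name is mine, and the same caveat about matte colours applies):

```python
import numpy as np

def maybe_premultiplied(rgba):
    """Return False if any pixel's colour exceeds what its alpha permits,
    proving the image is NOT premultiplied; True only means 'ambiguous'."""
    rgb = rgba[..., :3].astype(np.float64)
    alpha = rgba[..., 3:4].astype(np.float64)
    # In premultiplied data, max(r, g, b) can never exceed the alpha value
    # (both on the 0..255 scale here).
    return not np.any(rgb.max(axis=-1, keepdims=True) > alpha)

straight = np.array([[[255, 0, 0, 128]]], dtype=np.uint8)  # red at 50% alpha
premul = np.array([[[128, 0, 0, 128]]], dtype=np.uint8)    # same, premultiplied
print(maybe_premultiplied(straight))  # False -> definitely not premultiplied
print(maybe_premultiplied(premul))    # True  -> ambiguous
```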
An Android Bitmap stores images loaded from PNG with premultiplied alpha. You can't get the non-premultiplied original colours from it in the usual way.
In order to load images without the RGB channels being premultiplied, I had to use a 3rd-party PNGDecoder from here: http://twl.l33tlabs.org/#downloads

Remove background color in image processing for OCR

I am trying to remove background color so as to improve the accuracy of OCR against images. A sample would look like below:
I'd like to keep all the letters in the post-processed image while removing the light purple textured background. Is it possible to use some open-source software such as ImageMagick to convert it to a binary (black/white) image to achieve this goal? What if the background has more than one color? Would the solution be the same?
Further, what if I also want to remove the purple letters (theater name) and the line so as to only keep the black color letters? Simple cropping might not work because the purple letters could appear at other places as well.
I am looking for a solution in programming, rather than via tools like Photoshop.
You can do this using GIMP (or any other image editing tool).
Open your image
Convert to grayscale
Duplicate the layer
Apply Gaussian blur using a large kernel (10x10) to the top layer
Calculate the image difference between the top and bottom layer
Threshold the image to yield a binary image
Blurred image:
Difference image:
Binary:
If you're doing it as a once-off, GIMP is probably good enough. If you expect to do this many times over, you could probably write an imagemagick script or code up your approach using something like Python and OpenCV.
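If you do go the scripted route, the GIMP steps above translate fairly directly. Here is a rough NumPy-only sketch (a naive box blur stands in for the Gaussian, and the function names and threshold value are illustrative, not taken from any library):

```python
import numpy as np

def box_blur(gray, k=5):
    """Naive box blur: mean over a (2k+1)x(2k+1) edge-padded neighbourhood."""
    p = np.pad(gray.astype(np.float64), k, mode='edge')
    h, w = gray.shape
    out = np.zeros((h, w))
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * k + 1) ** 2

def binarize_by_difference(gray, k=5, thresh=30):
    """Steps 2-6 above: blur a copy, take the absolute difference
    against the original, then threshold to a binary image."""
    diff = np.abs(gray.astype(np.float64) - box_blur(gray, k))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

# Synthetic test card: flat grey page (200) with one dark 5x5 "letter" (20).
page = np.full((40, 40), 200, dtype=np.uint8)
page[18:23, 18:23] = 20
binary = binarize_by_difference(page)
```

The letter survives as white on black because it differs sharply from its blurred surroundings, while the flat background differences out to zero.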
Some problems with the above approach:
The purple text (CENTURY) gets lost because it isn't as contrasting as the other text. You could work around that by thresholding different parts of the image differently, or by using local histogram manipulation methods.
The following shows a possible strategy for processing your image and OCRing it.
The last step is doing the OCR. My OCR routine is VERY basic, so I'm sure you can get better results.
The code is Mathematica code.
Not bad at all!
In ImageMagick, you can use the -lat function (local adaptive threshold) to do that.
convert image.jpg -colorspace gray -negate -lat 50x50+5% -negate result.jpg
convert image.jpg -colorspace HSB -channel 2 -separate +channel \
-white-threshold 35% \
-negate -lat 50x50+5% -negate \
-morphology erode octagon:1 result2.jpg
You can apply a blur to the image so that you get an almost clean background. Then divide each color component of each pixel of the original image by the corresponding component of the background pixel, and you will get the text on a white background. Additional post-processing can help further.
This method works when the text is darker than the background (in each color component). Otherwise, you can invert the colors and apply the same method.
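A minimal NumPy sketch of the divide idea. The background array here is hand-made to stand in for a heavily blurred copy of the image, and the names are illustrative:

```python
import numpy as np

def flatten_background(img, background):
    """Divide each pixel by the background estimate: the background maps
    to ~white, while text darker than the background survives as grey."""
    ratio = img.astype(np.float64) / np.maximum(background.astype(np.float64), 1)
    return np.clip(ratio * 255, 0, 255).astype(np.uint8)

# Uneven page: 100 on the left half, 200 on the right, "letter" pixels at 30.
img = np.full((4, 8), 100, dtype=np.uint8)
img[:, 4:] = 200
img[1, 2] = 30  # dark letter on the dim side
img[1, 6] = 30  # dark letter on the bright side
background = np.full((4, 8), 100, dtype=np.uint8)  # stand-in for a heavy blur
background[:, 4:] = 200
out = flatten_background(img, background)
```

Both letters stay dark while the uneven background becomes a uniform 255, which is exactly what makes subsequent thresholding easy.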
If your image is captured as RGB, just use the green channel, or quickly convert the Bayer pattern, which is probably what #misha's convert-to-greyscale solution does.
Hope this helps someone
Using OpenCV and Python you can get it in one line of code:
import cv2

# Load the image as grayscale
im = cv2.imread('....../Downloads/Gd3oN.jpg', 0)
# Use adaptive thresholding with a Gaussian-weighted neighbourhood
th = cv2.adaptiveThreshold(im, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
Here's the result
Here's the link for Image Thresholding in OpenCV

Photoshop blending mode to OpenGL ES without shaders

I need to imitate Photoshop blending modes ("multiply", "screen" etc.) in my OpenGL ES 1.1 code (without shaders).
There are some docs on how to do this with HLSL:
http://www.nathanm.com/photoshop-blending-math/ (archive)
http://mouaif.wordpress.com/2009/01/05/photoshop-math-with-glsl-shaders/
I need at least working Screen mode.
Are there any implementations on fixed pipeline I may look at?
Most photoshop blend-modes are based upon the Porter-Duff blendmodes.
These require that all your images (textures, renderbuffers) are in premultiplied colour space. This is usually done by multiplying all pixel values by the alpha value before storing them in a texture. E.g. a fully transparent pixel will look black in premultiplied colour space. If you're unfamiliar with this colour space, spend an hour or two reading about it on the web. It's a neat and good concept, and it's required for Photoshop-like composition.
Anyway - once you have your images in that format you can enable SCREEN using:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR)
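To see why that blend function gives Screen: GL_ONE / GL_ONE_MINUS_SRC_COLOR computes src + dst * (1 - src), which is algebraically identical to the usual screen formula 1 - (1 - src)(1 - dst). A quick Python check of that identity (not GL code, just the arithmetic):

```python
def screen(src, dst):
    """Photoshop Screen: 1 - (1 - src) * (1 - dst), channels in [0, 1]."""
    return 1.0 - (1.0 - src) * (1.0 - dst)

def gl_blend(src, dst):
    """What glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR) computes:
    src * 1 + dst * (1 - src)."""
    return src * 1.0 + dst * (1.0 - src)

for s, d in [(0.0, 0.5), (0.25, 0.5), (1.0, 0.3), (0.6, 0.6)]:
    assert abs(screen(s, d) - gl_blend(s, d)) < 1e-12
print("identical on all samples")
```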
The full MULTIPLY mode is not possible with the OpenGL|ES fixed-function pipeline. If you only work with fully opaque pixels you can fake it using:
glBlendFunc(GL_ZERO, GL_SRC_COLOR)
The results for transparent pixels, either in your texture or in your framebuffer, will be wrong though.
You should try this:
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA)
This looks like multiplying to me on the iPhone / OpenGL ES
Your best place to start is to pick up a copy of the Red Book and read through the chapters on materials and blending modes. It has a very comprehensive and clear explanation of how the 'classic' OpenGL blending functions work.
I have found that using this:
glDepthFunc(GL_LEQUAL);
was all I needed to get a screen effect; at least it worked well on my project.
I am not sure why this works, but if someone knows, please share.
