Convert grayscale image to cross-hatch shading

I have a grayscale image that I would like to convert into cross-hatched shading so it can be printed by a black and white only printer.
Is this possible with any postprocessing tools, such as ImageMagick's convert?
TIA.

Yes, ImageMagick is the tool you want. First, take a look at the monochrome section of the usage docs to see the basics. Depending on the effect you want, you can also try ordered dithering to get that "newspaper" effect, or custom thresholds to shape the shading however you like; a basic example in that section produces exactly this kind of effect.
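As a rough sketch (the threshold-map names below are ImageMagick's standard built-in ones; adjust the map and the image names to taste):

# Halftone "newspaper" look via an angled halftone threshold map
convert input.png -colorspace Gray -ordered-dither h8x8a output.png

# Coarser ordered pattern, closer to a regular hatch/screen feel
convert input.png -colorspace Gray -ordered-dither o8x8 output.png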

Related

With ImageMagick, why do I get a halo when flattening a PSD with alpha?

I have a large number of PSD files which contain semi-transparent layers. These layers are not getting flattened correctly, regardless of what flags I use via convert or mogrify.
The simplest form looks as follows:
convert -background transparent source.psd -flatten output.png
Here is what the source image looks like in Photoshop. Note that this is a drop shadow layer and not a layer effect:
Here is how it comes out:
This may not be obvious against the Photoshop background, so here it is laid over a grey background:
Source:
Output:
EDIT:
I dug a bit into what is happening in the numbers. For the initial source image, the shadow is completely black and the alpha fades in. For the output image, the alpha is not as high, but it compensates by inaccurately lightening the image in a somewhat bumpy fashion. It's almost as if it's pre-multiplied, but taking the background as white?
Here is a straight RGB render without alpha multiplied in:
Source:
Output:
In other words, the RGB values are not being preserved at all. The alpha is being dimmed, but not distorted the way these values are. My guess would be some sort of rounding error from trying to extrapolate the color from the alpha, as though it is trying to "unpre-multiply" the values. Any help is appreciated.
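For example, the kind of lossy round trip I suspect, in rough Python terms (a numeric sketch only, not the actual ImageMagick internals):

# 8-bit channel value 200 at a low alpha of 10/255
r, a = 200, 10
premult = round(r * a / 255)           # stored premultiplied value -> 8
unpremult = round(premult * 255 / a)   # recovered value -> 204, not 200; the error grows as alpha shrinks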
The short answer is that it is fixed in V7 of the software (I think). I run a Mac, and the installer for V7 doesn't work well at all and appears unstable. After running it on an Ubuntu VM, it works fine. I have also confirmed with another user that, on Windows, V6 has this problem and V7 does not.

Matlab rgb2hsv changes image

I am trying to convert an array of RGB values into an array of HSV values in Matlab. I am running the following code:
pic = imread('ColoradoMountains.jpg');
pic = rgb2hsv(pic);
imwrite(pic,'pic.jpg')
But the image that gets written has completely different colors. I've been trying to set the color map to hsv, but I'm not sure that even makes sense. Nothing on the internet comes up for my keywords; can someone please point me in the right direction?
You can specify the colormap that imwrite is supposed to use. Try this:
imwrite(pic,colormap('HSV'),'pic.png');
Here's the documentation for imwrite: http://www.mathworks.com/help/matlab/ref/imwrite.html
In Matlab you have to distinguish between an indexed image and a 3-channel image. An indexed image is an n*m*1 image where each value in the [0,1] range is associated with a colour. This association is called a colour map, which could for example be a unit circle in HSV or RGB. Such an image can be written using imwrite with the map parameter, but this is not what you want.
What you apparently want is an HSV-encoded image, which means the three RGB channels are converted to three HSV channels. This is (as far as I know) not possible. If you take a look at the documentation of imwrite, you can see how CMYK-encoded images are written; this is done implicitly by passing an n*m*4 image. Is there any standard file format that supports HSV? If so, I'll take a closer look at that format.
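As a sketch of the two usual workarounds (assuming pic still holds the output of rgb2hsv from the question): either convert back to RGB before writing so the file displays correctly, or save the raw HSV values as data rather than as an encoded image:

imwrite(hsv2rgb(pic), 'pic.jpg');   % convert back so viewers interpret the channels as RGB
save('pic_hsv.mat', 'pic');         % or keep the HSV values themselves for later processing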

Algorithm for changing a specific color when rendering

Given that I have a texture file generated from a font bitmap builder like this:
Now I load it into my program. Then I want to draw my text in colors other than black, such as blue, pink, and so on, using that original texture file.
What trick or algorithm should I use?
Can anyone help me, please?
Thank you very much.
If you have access to a white version with transparency, you can use hardware vertex coloring to get any color you want, but as noted, the text has to be white. Doing it in a software loop is possible, but only by brute force: even if you use a library, it will just be scanning every pixel, converting it, and re-writing it again, which is slow. So use hardware vertex coloring.
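A minimal fixed-function sketch of that idea for OpenGL ES 1.1 (fontTexture is a placeholder for your glyph atlas; with GL_MODULATE the white texels are multiplied by the current vertex color, so the text comes out in whatever color you set):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, fontTexture);                    /* white-on-transparent glyph atlas */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  /* texel color * vertex color */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(0.2f, 0.4f, 1.0f, 1.0f);                            /* e.g. draw the text in blue */
/* ...submit the glyph quads as usual... */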

Remove background color in image processing for OCR

I am trying to remove background color so as to improve the accuracy of OCR against images. A sample would look like below:
I'd like to keep all the letters in the post-processed image while removing just the light purple textured background. Is it possible to use some open source software, such as ImageMagick, to convert it to a binary (black/white) image to achieve this goal? What if the background has more than one color? Would the solution be the same?
Further, what if I also want to remove the purple letters (the theater name) and the line, so as to keep only the black letters? Simple cropping might not work because the purple letters could appear in other places as well.
I am looking for a solution in programming, rather than via tools like Photoshop.
You can do this using GIMP (or any other image editing tool).
Open your image
Convert to grayscale
Duplicate the layer
Apply Gaussian blur using a large kernel (10x10) to the top layer
Calculate the image difference between the top and bottom layer
Threshold the image to yield a binary image
Blurred image:
Difference image:
Binary:
If you're doing it as a one-off, GIMP is probably good enough. If you expect to do this many times over, you could write an ImageMagick script or code up the approach using something like Python and OpenCV.
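A rough OpenCV translation of the steps above (the filename, kernel size, and thresholding choice are placeholders to tune):

import cv2

gray = cv2.imread('ticket.jpg', cv2.IMREAD_GRAYSCALE)   # open and convert to grayscale
blurred = cv2.GaussianBlur(gray, (51, 51), 0)            # heavily blurred copy of the image
diff = cv2.absdiff(blurred, gray)                        # difference makes the text stand out
_, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # threshold to binary
cv2.imwrite('binary.png', 255 - binary)                  # invert so the text is black on white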
Some problems with the above approach:
The purple text (CENTURY) gets lost because it isn't as contrasting as the other text. You could work around this by thresholding different parts of the image differently, or by using local histogram manipulation methods.
The following shows a possible strategy for processing your image and then OCRing it.
The last step is doing an OCR. My OCR routine is VERY basic, so I'm sure you may get better results.
The code is Mathematica code.
Not bad at all!
In ImageMagick, you can use the -lat (local adaptive threshold) function to do that:
convert image.jpg -colorspace gray -negate -lat 50x50+5% -negate result.jpg
A variant that works on the brightness channel of HSB, with a white threshold and a small morphological erode for cleanup:
convert image.jpg -colorspace HSB -channel 2 -separate +channel \
-white-threshold 35% \
-negate -lat 50x50+5% -negate \
-morphology erode octagon:1 result2.jpg
You can apply a blur to the image so that you get an almost clean background. Then divide each color component of each pixel of the original image by the corresponding component of the background pixel, and you will get the text on a white background. Additional postprocessing can help further.
This method works when the text is darker than the background (in each color component). Otherwise, you can invert the colors and apply the same method.
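A rough sketch of that idea in Python/OpenCV (the kernel size and filenames are placeholders):

import cv2
import numpy as np

img = cv2.imread('ticket.jpg').astype(np.float32)
background = cv2.GaussianBlur(img, (101, 101), 0)                        # blurred copy ~ background estimate
normalized = np.clip(img / (background + 1.0) * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite('normalized.png', normalized)                                # dark text on a near-white background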
If your image is captured as RGB, just use the green channel, or quickly convert from the Bayer pattern, which is probably what #misha's convert-to-greyscale solutions do anyway.
Hope this helps someone.
You can do this in one line using OpenCV and Python:
import cv2

# Load the image as grayscale
im = cv2.imread('....../Downloads/Gd3oN.jpg', 0)
# Use adaptive thresholding with a Gaussian-weighted neighbourhood
th = cv2.adaptiveThreshold(im, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
Here's the result
Here's the link for Image Thresholding in OpenCV

Photoshop blending mode to OpenGL ES without shaders

I need to imitate Photoshop blending modes ("multiply", "screen" etc.) in my OpenGL ES 1.1 code (without shaders).
There are some docs on how to do this with shaders:
http://www.nathanm.com/photoshop-blending-math/ (archive)
http://mouaif.wordpress.com/2009/01/05/photoshop-math-with-glsl-shaders/
I need at least working Screen mode.
Are there any fixed-pipeline implementations I could look at?
Most Photoshop blend modes are based upon the Porter-Duff blend modes.
These require that all your images (textures, renderbuffers) are in a premultiplied color space. This is usually done by multiplying all pixel values by the alpha value before storing them in a texture. E.g., a fully transparent pixel becomes black once premultiplied. If you're unfamiliar with this color space, spend an hour or two reading about it on the web. It's a neat and good concept, and it is required for Photoshop-like compositing.
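For example, a small sketch of premultiplying an RGBA8 buffer before uploading it with glTexImage2D (the buffer layout and function name are assumptions about your own loading code):

#include <stddef.h>
#include <stdint.h>

static void premultiply_rgba(uint8_t *pixels, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        uint8_t a = pixels[i * 4 + 3];
        pixels[i * 4 + 0] = (uint8_t)((pixels[i * 4 + 0] * a + 127) / 255);  /* R *= A */
        pixels[i * 4 + 1] = (uint8_t)((pixels[i * 4 + 1] * a + 127) / 255);  /* G *= A */
        pixels[i * 4 + 2] = (uint8_t)((pixels[i * 4 + 2] * a + 127) / 255);  /* B *= A */
    }
}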
Anyway - once you have your images in that format you can enable SCREEN using:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_COLOR)
The full MULTIPLY mode is not possible with the OpenGL|ES pipeline. If you only work with fully opaque pixels, you can fake it using:
glBlendFunc(GL_ZERO, GL_SRC_COLOR)
The results for transparent pixels, either in your texture or in your framebuffer, will be wrong though.
You should try this:
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA)
This looks like multiplying to me on the iPhone / OpenGL ES.
Your best place to start is to pick up a copy of the Red Book and read through the chapters on materials and blending modes. It has a very comprehensive and clear explanation of how the 'classic' OpenGL blending functions work.
I have found that using this:
glDepthFunc(GL_LEQUAL);
was all I needed to get a screen effect; at least it worked well on my project.
I am not sure why this works, but if someone knows, please share.
