How to convert an image to a GIF with only a two-color index in Imagick - imagick

I want to convert an image to GIF format with only two colors, black and white, so that every pixel in the output image is either black or white.
Does anybody know a way to do this in Imagick?

Although the final step of writing out a two-color GIF is the same in every case, there are subtly different ways of converting an image to two-color black and white. Here are some methods below.
First, a helper function that forces every pixel to be either (0, 0, 0) or (255, 255, 255):
function forceBlackAndWhite(Imagick $imagick, $ditherMethod = \Imagick::DITHERMETHOD_NO)
{
    // Build a two-entry palette: one black pixel and one white pixel
    $palette = new Imagick();
    $palette->newPseudoImage(1, 2, 'gradient:black-white');
    $palette->setImageFormat('png');
    //$palette->writeImage('palette.png');

    // Make the image use only these palette colors
    $imagick->remapImage($palette, $ditherMethod);
    $imagick->setImageDepth(1);
}
The simplest method is to remap to the palette directly, forcing the image to two colors without any dithering.
function twoColorPaletteOnly()
{
    $imagick = new Imagick(__DIR__."/../images/Biter_500.jpg");
    forceBlackAndWhite($imagick, \Imagick::DITHERMETHOD_NO);
    $imagick->setImageFormat('gif');
    $imagick->writeImage("./outputPalette.gif");
}
Palette output:
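Under the hood, remapImage snaps each pixel to the closest entry in the supplied palette. A minimal JavaScript sketch of that nearest-colour step (illustrative only; remapPixel is a hypothetical helper, and 8-bit channel values are assumed):

```javascript
// Squared distance between two RGB triples
function distanceSq(a, b) {
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2;
}

// Return the palette entry closest to the pixel (no dithering)
function remapPixel(pixel, palette) {
    let best = palette[0];
    for (const entry of palette) {
        if (distanceSq(pixel, entry) < distanceSq(pixel, best)) {
            best = entry;
        }
    }
    return best;
}

const blackWhite = [[0, 0, 0], [255, 255, 255]];
remapPixel([30, 40, 20], blackWhite);    // dark pixel  -> [0, 0, 0]
remapPixel([200, 180, 220], blackWhite); // light pixel -> [255, 255, 255]
```

With a dither method other than DITHERMETHOD_NO, the quantization error of each pixel would additionally be diffused into its neighbours instead of being discarded.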
Using posterizeImage (http://phpimagick.com/Imagick/posterizeImage) allows different control over the dithering process.
function twoColorViaPosterize()
{
    $imagick = new Imagick(__DIR__."/../images/Biter_500.jpg");
    $imagick->transformImageColorspace(\Imagick::COLORSPACE_GRAY);
    $imagick->posterizeImage(2, \Imagick::DITHERMETHOD_RIEMERSMA);
    forceBlackAndWhite($imagick);
    $imagick->setImageFormat('gif');
    $imagick->writeImage("./outputPosterize.gif");
}
Posterize output:
The thresholdImage function allows us to control the 'level' at which the image changes from black to white.
function twoColorViaThreshold()
{
    $imagick = new Imagick(__DIR__."/../images/Biter_500.jpg");
    $imagick->transformImageColorspace(\Imagick::COLORSPACE_GRAY);
    $imagick->thresholdImage(0.5 * \Imagick::getQuantum());
    forceBlackAndWhite($imagick);
    $imagick->setImageFormat('gif');
    $imagick->writeImage("./outputThreshold.gif");
}
Threshold output:
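Conceptually, the threshold step just compares each gray value against the chosen level and snaps it to pure black or white. A plain JavaScript sketch, assuming a hypothetical 8-bit quantum of 255 (real ImageMagick builds are often compiled with a 16-bit quantum, which is why the code above multiplies 0.5 by getQuantum()):

```javascript
const QUANTUM = 255; // 8-bit quantum assumed for illustration

// Snap a gray value to black or white at the given level
function threshold(gray, level = 0.5 * QUANTUM) {
    return gray >= level ? 255 : 0;
}

[10, 120, 128, 240].map(g => threshold(g)); // -> [0, 0, 255, 255]
```

Whether the boundary value itself rounds up or down is an implementation detail; the point is that every output value is one of exactly two levels.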
Using the blackThresholdImage and whiteThresholdImage functions allows us to control the color threshold per channel.
function twoColorViaColorThreshold()
{
    $imagick = new Imagick(__DIR__."/../images/Biter_500.jpg");
    $thresholdColor = "RGB(127, 100, 100)";
    $imagick->blackThresholdImage($thresholdColor);
    $imagick->whiteThresholdImage($thresholdColor);
    forceBlackAndWhite($imagick);
    $imagick->setImageFormat('gif');
    $imagick->writeImage("./outputColorThreshold.gif");
}
colorThreshold output:
Extracting a single image channel can produce a 'cleaner' looking output image.
function twoColorViaColorChannelThreshold()
{
    $imagick = new Imagick(__DIR__."/../images/Biter_500.jpg");
    $imagick->separateImageChannel(\Imagick::CHANNEL_RED);
    $imagick->thresholdImage(0.5 * \Imagick::getQuantum());
    forceBlackAndWhite($imagick);
    $imagick->setImageFormat('gif');
    $imagick->writeImage("./outputColorChannelThreshold.gif");
}
colorChannelThreshold output:
We can combine the RGB channels more precisely using the colorMatrixImage function, which gives us complete control over how the separate R G B values should affect the output image.
function twoColorViaColorMatrixChannelThreshold()
{
    $imagick = new Imagick(__DIR__."/../images/Biter_500.jpg");

    // After applying this color matrix, the intensity of the red channel
    // is taken from the pre-transform pixel values as:
    // 60% of the red, 20% of the green and 20% of the blue.
    $colorMatrix = [
        0.6, 0.2, 0.2, 0.0, 0.0,
        0.0, 0.0, 0.0, 0.0, 0.0,
        0.0, 0.0, 0.0, 0.0, 0.0,
        0.0, 0.0, 0.0, 1.0, 0.0,
        0.0, 0.0, 0.0, 0.0, 0.0,
    ];
    $imagick->colorMatrixImage($colorMatrix);
    $imagick->separateImageChannel(\Imagick::CHANNEL_RED);
    $imagick->thresholdImage(0.5 * \Imagick::getQuantum());
    forceBlackAndWhite($imagick);
    $imagick->setImageFormat('gif');
    $imagick->writeImage("./outputColorMatrixChannelThreshold.gif");
}
colorMatrixChannelThreshold output:
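The effect of the matrix's first row can be sketched directly: the new red value is a weighted sum of the input channels, which the later threshold step then snaps to black or white. A JavaScript sketch (weightedRed is a hypothetical helper; 8-bit channel values assumed):

```javascript
// New red value after the colour matrix: 60% R + 20% G + 20% B,
// rounded back to an integer channel value
function weightedRed(r, g, b) {
    return Math.round(0.6 * r + 0.2 * g + 0.2 * b);
}

weightedRed(255, 255, 255); // -> 255 (white stays at full intensity)
weightedRed(255, 0, 0);     // -> 153 (pure red lands above a mid threshold)
weightedRed(0, 0, 255);     // -> 51  (pure blue falls well below it)
```

Changing the three weights shifts which hues end up black and which end up white after thresholding, which is the "complete control" the text above refers to.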
The output images for the code above are:

Related

How to calculate rotation just like in MotionBuilder

Problem:
My goal is to write code that rotates the root joint of a BVH θ degrees around the global Y axis and keeps values in the range of -180 to 180 (just like MotionBuilder does). I have tried to rotate a joint using Euler angles, quaternions and matrices (taking the rotation order of the BVH into account), but I haven't yet figured out how to get the correct values. MotionBuilder calculates the values x, y, z so they are valid for the BVH file. I would like to write code that calculates the rotation x, y, z for a joint, just like MotionBuilder does.
Example:
Initial: Root rotation: [x= -169.56, y=15.97, z=39.57]
After manually rotating about 45 degrees: Root rotation: [x=-117.81, y=49.37, z=70.15]
To rotate a node around the world Y axis by any number of degrees, the following works (see https://en.wikipedia.org/wiki/Rotation_matrix):
import math
from pyfbsdk import *

angle = 45.0
radians = math.radians(angle)

root_matrix = FBMatrix()
root.GetMatrix(root_matrix, FBModelTransformationType.kModelRotation, True)

transformation_matrix = FBMatrix([
    math.cos(radians), 0.0, math.sin(radians), 0.0,
    0.0, 1.0, 0.0, 0.0,
    -math.sin(radians), 0.0, math.cos(radians), 0.0,
    0.0, 0.0, 0.0, 1.0
])

result_matrix = root_matrix * transformation_matrix
root.SetMatrix(result_matrix, FBModelTransformationType.kModelRotation, True)
If there are any pre-rotations on the root node the process is more complex; you can try setting the rotation using SetVector after converting the result matrix with the LRMToDof method:
result_vector = FBVector3d()
root.LRMToDof(result_vector, result_matrix)
root.SetVector(result_vector, FBModelTransformationType.kModelRotation, True)
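The matrix in the code above is the standard rotation about the world Y axis. As a sanity check of the math itself, independent of pyfbsdk, here is a small JavaScript sketch using the column-vector convention (MotionBuilder's own matrix conventions may differ by a transpose, but the rotation is the same):

```javascript
// Rotate a 3-vector about the world Y axis (column-vector convention)
function rotateY([x, y, z], degrees) {
    const r = degrees * Math.PI / 180;
    const c = Math.cos(r), s = Math.sin(r);
    return [c * x + s * z, y, -s * x + c * z];
}

// Rotating the +X axis 90 degrees about Y lands it on the -Z axis
rotateY([1, 0, 0], 90); // ≈ [0, 0, -1]
```

The Y component passes through unchanged, which is exactly what "rotation around the global Y axis" means.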

Image filter kernel to expand 16-235 limited color range

Is it possible to write a 5x5 kernel that expands the limited color range into the full range?
This is my sample bitonal kernel, and I don't know what values to use and where to achieve this color expansion:
Grayscale
{ 0.3, 0.3, 0.3, 0.0, 0.0 }
{ 0.6, 0.6, 0.6, 0.0, 0.0 }
{ 0.1, 0.1, 0.1, 0.0, 0.0 }
{ 0.0, 0.0, 0.0, 1.0, 0.0 }
{ 0.0, 0.0, 0.0, 0.0, 1.0 }
I would like an RGB color expansion: RGB 16-235 => 0-255.
However, I need the kernel matrix because I am not processing the image myself; I'm passing the matrix to a Windows API function (the undocumented SetMagnificationDesktopColorEffect).
I cannot do a simple subtract/divide/multiply on the pixels; I do not have them.
You can basically do it without a kernel by subtracting 16 from your image and then dividing it by 219. That gives you an image normalized to the 0-1 range, which you then multiply by 255 to get back to the 0-255 intensity range.
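If it has to be expressed as a 5x5 matrix, the same subtract-and-scale folds into one: the scale factor 255/219 goes on the R, G and B diagonal and the offset -16/219 into the translation row. This sketch assumes the usual 5x5 colour-matrix layout (channel values normalised to 0-1, fifth row acting as an additive offset); since SetMagnificationDesktopColorEffect is undocumented, the exact layout it expects is an assumption:

```javascript
const scale = 255 / 219;   // ≈ 1.1644, stretches 219 input steps over 255
const offset = -16 / 219;  // ≈ -0.0731, shifts normalised 16/255 down to 0

// Hypothetical 5x5 colour-effect matrix (row-major, GDI+ ColorMatrix layout)
const colorMatrix = [
    scale, 0.0,   0.0,   0.0, 0.0,
    0.0,   scale, 0.0,   0.0, 0.0,
    0.0,   0.0,   scale, 0.0, 0.0,
    0.0,   0.0,   0.0,   1.0, 0.0,
    offset, offset, offset, 0.0, 1.0,
];

// Check the endpoints on a single normalised channel value
const expand = v => v * scale + offset;
expand(16 / 255);  // -> ≈ 0 (limited-range black becomes full black)
expand(235 / 255); // -> ≈ 1 (limited-range white becomes full white)
```

Values outside 16-235 come out below 0 or above 1 and would need clamping, which the display pipeline normally does anyway.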

Strange behavior of alpha without blending in WebGL

I found strange behavior in WebGL when rendering with blending turned off. I reproduced it on this simple tutorial.
Just change the lines:
gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
to
gl_FragColor = vec4(0.0, 0.0, 0.0, 0.5);
and
gl.clearColor(0.0, 0.0, 0.0, 1.0);
to
gl.clearColor(1.0, 1.0, 1.0, 1.0);
So, since blending is turned off, I expected to see black shapes on a white background (the pixel's alpha of 0.5 shouldn't have any influence). But I see gray shapes on a white background. I believe I missed something, but I can't understand what. Any ideas?
P.S. gl.disable(gl.BLEND) doesn't change the result.
This is basically already answered here
Alpha rendering difference between OpenGL and WebGL
What you're seeing is that WebGL canvases are, by default, blended with the background: either the canvas's own background color or that of whatever element it's a child of. The default background color for HTML is white, so if you draw with [0.0, 0.0, 0.0, 0.5] that's 50%-alpha black blended with the white webpage.
See the link above for how to fix it.
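What you are seeing is ordinary "over" compositing of the canvas onto the page background, done by the browser rather than by GL. A sketch of that blend (assuming a non-premultiplied source, which is what a gl_FragColor with alpha 0.5 produces here):

```javascript
// "Over" compositing of a source pixel onto a destination pixel,
// per channel, with non-premultiplied source alpha
function over(src, dst, alpha) {
    return src * alpha + dst * (1 - alpha);
}

// Black (0.0) at alpha 0.5 over a white page (1.0):
over(0.0, 1.0, 0.5); // -> 0.5, i.e. the gray you actually see
```

Always writing 1.0 alpha from the shader, or requesting the context with the alpha: false context attribute, avoids the effect.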

Texturing OpenGL/WebGL rectangle

I have drawn a rectangle, which spins. I have also set a texture for it, but it looks very ugly (very low resolution).
The table of vertices for the rectangle is:
1.0, 1.0, 0.0,
-1.0, 1.0, 0.0,
1.0, -1.0, 0.0,
-1.0, -1.0, 0.0
These are the texture coordinates mapping the rectangle above:
0.0, 0.0,
1.0, 0.0,
1.0, 1.0,
0.0, 1.0,
The texture size is 512x512 (it's large enough, so the size itself shouldn't be the problem).
The full source code is here:
http://pastebin.com/qXJFNe1c
I clearly understand that this is my fault, but I don't see where exactly the fault is.
PS
I think this problem isn't strongly tied to WebGL; pure OpenGL developers could probably give me a piece of advice too.
If you want to test it live, you may check it via:
http://goo.gl/YpXyPl
When testing I get the warning [.WebGLRenderingContext]RENDER WARNING: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete', but it does render a texture at least, so maybe that's about something else.
Also, the viewport seems to be 300x150: the canvas rendering width/height does not match the width/height within the HTML page. Try this:
function updateCanvasSize(canvas)
{
    if (canvas.width != canvas.clientWidth || canvas.height != canvas.clientHeight)
    {
        canvas.width = canvas.clientWidth;
        canvas.height = canvas.clientHeight;
        gl.viewportWidth = canvas.width;
        gl.viewportHeight = canvas.height;
        // might want to update the projection matrix for the new aspect ratio too
    }
}
and in a draw or update function (I doubt this would impact performance)...
var canvas = document.getElementById("canvas-main"); //or store "canvas" globally
updateCanvasSize(canvas);
The incorrect canvas size occurs because, at the time webGLStart is called, the canvas is actually 300x150 for whatever reason. Either 1. you actually want a fixed-size render target (give the canvas width/height in pixels and all is well), or 2. you want it to take up the whole window, in which case the user may resize the window and that needs to be handled (JS has a resize event you could use instead of polling).

How color attributes work in VBO?

I am coding against OpenGL ES 2.0 (WebGL). I am using VBOs to draw primitives. I have a vertex array, a color array and an array of indices. I have looked at sample code, books and tutorials, but one thing I don't get: if color is defined per vertex, how does it affect the polygonal surfaces adjacent to those vertices? (I am a newbie to OpenGL (ES).)
I will explain with an example. I have a cube to draw. From what I read in an OpenGL ES book, color is defined as a vertex attribute. In that case, if I want to draw the 6 faces of the cube with 6 different colors, how should I define the colors? The source of my confusion is: each vertex is common to 3 faces, so how does defining a color per vertex help? (Or should the color be defined per index?) The fact that we need to subdivide these faces into triangles makes it harder for me to understand how this relationship works. The same confusion goes for edges. Instead of drawing triangles, let's say I want to draw edges using the LINES primitive, each edge a different color. How am I supposed to define color attributes in that case?
I have seen few working examples. Specifically this tutorial: http://learningwebgl.com/blog/?p=370
I see how the color array is defined in the above example to draw a cube with 6 differently colored faces, but I don't understand why it is defined that way. (Why is each color copied 4 times into unpackedColors, for instance?)
Can someone explain how color attributes work in VBO?
[The link above seems inaccessible, so I will post the relevant code here]
cubeVertexPositionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexPositionBuffer);
vertices = [
    // Front face
    -1.0, -1.0,  1.0,
     1.0, -1.0,  1.0,
     1.0,  1.0,  1.0,
    -1.0,  1.0,  1.0,
    // Back face
    -1.0, -1.0, -1.0,
    -1.0,  1.0, -1.0,
     1.0,  1.0, -1.0,
     1.0, -1.0, -1.0,
    // Top face
    -1.0,  1.0, -1.0,
    -1.0,  1.0,  1.0,
     1.0,  1.0,  1.0,
     1.0,  1.0, -1.0,
    // Bottom face
    -1.0, -1.0, -1.0,
     1.0, -1.0, -1.0,
     1.0, -1.0,  1.0,
    -1.0, -1.0,  1.0,
    // Right face
     1.0, -1.0, -1.0,
     1.0,  1.0, -1.0,
     1.0,  1.0,  1.0,
     1.0, -1.0,  1.0,
    // Left face
    -1.0, -1.0, -1.0,
    -1.0, -1.0,  1.0,
    -1.0,  1.0,  1.0,
    -1.0,  1.0, -1.0,
];
// (WebGLFloatArray was the typed-array name in early WebGL; modern code uses Float32Array)
gl.bufferData(gl.ARRAY_BUFFER, new WebGLFloatArray(vertices), gl.STATIC_DRAW);
cubeVertexPositionBuffer.itemSize = 3;
cubeVertexPositionBuffer.numItems = 24;

cubeVertexColorBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexColorBuffer);
var colors = [
    [1.0, 0.0, 0.0, 1.0], // Front face
    [1.0, 1.0, 0.0, 1.0], // Back face
    [0.0, 1.0, 0.0, 1.0], // Top face
    [1.0, 0.5, 0.5, 1.0], // Bottom face
    [1.0, 0.0, 1.0, 1.0], // Right face
    [0.0, 0.0, 1.0, 1.0], // Left face
];
// Repeat each face color once for each of the face's 4 vertices
var unpackedColors = [];
for (var i in colors) {
    var color = colors[i];
    for (var j = 0; j < 4; j++) {
        unpackedColors = unpackedColors.concat(color);
    }
}
gl.bufferData(gl.ARRAY_BUFFER, new WebGLFloatArray(unpackedColors), gl.STATIC_DRAW);
cubeVertexColorBuffer.itemSize = 4;
cubeVertexColorBuffer.numItems = 24;

cubeVertexIndexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer);
var cubeVertexIndices = [
    0, 1, 2,      0, 2, 3,    // Front face
    4, 5, 6,      4, 6, 7,    // Back face
    8, 9, 10,     8, 10, 11,  // Top face
    12, 13, 14,   12, 14, 15, // Bottom face
    16, 17, 18,   16, 18, 19, // Right face
    20, 21, 22,   20, 22, 23  // Left face
];
// (WebGLUnsignedShortArray is Uint16Array in modern WebGL)
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new WebGLUnsignedShortArray(cubeVertexIndices), gl.STATIC_DRAW);
cubeVertexIndexBuffer.itemSize = 1;
cubeVertexIndexBuffer.numItems = 36;
The way I like to look at it is that each vertex is not a point in space but rather a bundle of attributes. These generally (but not always) include its location and may include its colour, texture coordinates, etc., etc., etc. A triangle (or line, or other primitive) is defined by specifying a set of vertices, then generating values for each attribute at each pixel by linearly interpolating the per-vertex values.
As Liam says, and as you've realised in your comment, this means that if you want to have a point in space that is used by a vertex for multiple primitives -- for example, the corner of a cube -- with other non-location attributes varying on a per-primitive basis, you need a separate vertex for each combination of attributes.
This is wasteful of memory to some degree -- but the complexity involved in doing it any other way would make things worse, and would require the graphics hardware to do a lot more work unpacking and repacking data. To me, it feels like the waste is comparable to the waste we get by using 32-bit RGBA values for each pixel in our video memory instead of keeping a "palette" lookup table of every colour we want to use and then just storing an index into that per-pixel (which is, of course, what we used to do when RAM was more expensive).
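The interpolation described above can be sketched for a single attribute along one edge of a primitive (a minimal JavaScript sketch; a real rasteriser interpolates across the whole triangle using barycentric weights, usually with perspective correction):

```javascript
// Linearly interpolate a per-vertex attribute (here an RGB colour)
// at fraction t of the way from vertex a to vertex b
function lerpAttribute(a, b, t) {
    return a.map((v, i) => v * (1 - t) + b[i] * t);
}

const red  = [1.0, 0.0, 0.0];
const blue = [0.0, 0.0, 1.0];
lerpAttribute(red, blue, 0.5); // -> [0.5, 0, 0.5], purple halfway along the edge
```

This is why a face whose four corners share one colour comes out flat-shaded: interpolating between equal values just reproduces that value at every pixel.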
If the color of a set of polygons is the same then a vertex shared by all of the polygons, along with its color, can be defined once and shared by the polygons (using an index).
If the color of the polygons is different then even though the position of a vertex may be common, the color is not, and therefore the vertex as a whole cannot be shared. You will need to define the vertex for each polygon.
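This is exactly why the quoted cube code uses 24 vertices rather than 8: each face gets its own four corner vertices so that every corner can carry that face's colour. The unpacking loop from the tutorial can be checked on its own:

```javascript
const faceColors = [
    [1.0, 0.0, 0.0, 1.0], // Front
    [1.0, 1.0, 0.0, 1.0], // Back
    [0.0, 1.0, 0.0, 1.0], // Top
    [1.0, 0.5, 0.5, 1.0], // Bottom
    [1.0, 0.0, 1.0, 1.0], // Right
    [0.0, 0.0, 1.0, 1.0], // Left
];

// Repeat each face colour once per corner: 6 faces x 4 corners = 24 vertices
let unpackedColors = [];
for (const color of faceColors) {
    for (let j = 0; j < 4; j++) {
        unpackedColors = unpackedColors.concat(color);
    }
}

unpackedColors.length; // -> 96 floats: 24 vertices x 4 RGBA components
```

The position buffer matches: 24 vertices x 3 components = 72 floats, even though the cube only has 8 distinct corner positions.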
