How to convert from Cairo to Imager? - image

For historical reasons we are forced to use Cairo and Imager together.
Converting from Cairo to Imager and then writing the result back out makes the colors come out strange.
The reason for converting to Imager is that the result has to be combined with another Imager object afterwards.
# Create a yellow fill image as a Cairo object
# and output it as a PNG file.
my $testSurface = Cairo::ImageSurface->create(
    'argb32',
    $width,
    $height
);
my $testContext = Cairo::Context->create($testSurface);
$testContext->rectangle(0, 0, $width, $height);
$testContext->set_source_rgba(1.0, 1.0, 0.0, 1);
$testContext->fill();

# This is a yellow PNG file.
$testSurface->write_to_png("output/fill_yellow.png");

# Convert the Cairo object to an Imager object.
my $testData = $testSurface->get_data;
my $testImager = Imager->new(
    xsize    => $width,
    ysize    => $height,
    channels => 4,
);
my $testRes = $testImager->read(
    data              => $testData,
    type              => "raw",
    xsize             => $width,
    ysize             => $height,
    raw_datachannels  => 4,
    raw_storechannels => 4,
    raw_interleave    => 0,
);

# Output the Imager object as a PNG file.
# ! This PNG file unexpectedly becomes blue.
$testRes->write(
    file => "output/fill_yellow_imager.png",
    type => "png"
);

Cairo is using ARGB for its raw format, while Imager is using RGBA. The difference between them is the order that the samples are stored within each pixel. Cairo also uses premultiplied alpha, while Imager uses non-premultiplied alpha. Neither library seems to have any option to change either of these things.
The sample ordering could be fixed fairly easily by re-ordering the bytes within the raw image data, but undoing the premultiplication starts to get into the territory of not being worth the bother. Therefore I recommend that you simply save a PNG file from Cairo and load it in Imager. It may be slightly slower, but it's easy to understand and easy to verify that it's correct.
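The question is in Perl, but just to illustrate the shape of that round trip, here is a sketch using the Python bindings (pycairo, with Pillow standing in for Imager; both are assumptions for illustration, not the libraries from the question). The point is that Cairo serialises to PNG and the other library deserialises it, so neither side ever has to interpret the other's raw pixel layout.

import io

import cairo            # pycairo
from PIL import Image   # Pillow stands in for Imager here

# Draw the same yellow rectangle with Cairo.
surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 100, 100)
ctx = cairo.Context(surface)
ctx.rectangle(0, 0, 100, 100)
ctx.set_source_rgba(1.0, 1.0, 0.0, 1.0)
ctx.fill()

# Round-trip through an in-memory PNG instead of touching the raw buffer.
buf = io.BytesIO()
surface.write_to_png(buf)
buf.seek(0)
img = Image.open(buf)    # comes out yellow, not blue
img.save('fill_yellow_roundtrip.png')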
Below is code from a previous version of this answer that recommended swapping the byte order, written before I noticed the premultiplied alpha issue:
for (my $i = 0 ; $i < length($testData) ; $i += 4) {
    substr($testData, $i, 4,
        substr($testData, $i+1, 3) . substr($testData, $i, 1)
    );
}
I don't recommend using it.
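To show how much extra work the "proper" raw conversion would involve, here is a rough sketch in plain Python (no libraries, just bytes; my own illustration, not code from either library). It reorders the samples into RGBA and undoes the premultiplication, assuming the per-pixel A,R,G,B sample order described above (the actual in-memory byte order also depends on endianness):

def argb_premultiplied_to_rgba(data):
    out = bytearray(len(data))
    for i in range(0, len(data), 4):
        a, r, g, b = data[i], data[i + 1], data[i + 2], data[i + 3]
        if a:
            # Undo premultiplication: the stored value is roughly value * a / 255.
            r = min(255, r * 255 // a)
            g = min(255, g * 255 // a)
            b = min(255, b * 255 // a)
        out[i:i + 4] = bytes((r, g, b, a))   # RGBA sample order
    return bytes(out)

# One fully opaque yellow pixel, premultiplied ARGB:
print(argb_premultiplied_to_rgba(bytes((255, 255, 255, 0))))   # b'\xff\xff\x00\xff'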

Related

Matlab imwrite changed my colour

I'm trying to convert some similar images from gif to png.
You can find two of the pictures here:
https://europa.eu/european-union/about-eu/history/1980-1989_en.
After converting the first gif (for the year 1981), the background colour is the same as before, white, but for the second gif (for the year 1986) the background colour changed to pink. How can I fix it?
Below is my code:
file_in = uigetfile('*.*', 'All Files', 'MultiSelect','on');
file_out = cellfun(@(x) cat(2, x(1:(length(x)-3)), 'png'), ...
    file_in, 'UniformOutput', false);
for i = 1:length(file_in)
    [gif, map] = imread(file_in{i});
    imwrite(gif, map, file_out{i}, 'Background', [0 0 0]);
end
Matlab does not change the image colours during the conversion; if you open the 'gif' and the 'png' with imshow you will get the same result.
Anyway, if you want to change the background colour to white, you can use code along the lines sketched below.
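The Matlab snippet referred to above is missing from this copy of the answer. Purely as an illustration of the same idea in another language, here is a minimal sketch with Python/Pillow (an assumption, not the original poster's code; the filename is hypothetical): flatten the GIF's transparency onto a white background before saving the PNG.

from PIL import Image

# Sketch only: composite a (possibly transparent) GIF onto white, then save as PNG.
gif = Image.open('1986.gif').convert('RGBA')
background = Image.new('RGBA', gif.size, (255, 255, 255, 255))
Image.alpha_composite(background, gif).convert('RGB').save('1986.png')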

Compare two images and highlight differences along on the second image

Below is my current working Python code, using PIL, for highlighting the differences between two images. The rest of the image, however, is blacked out.
I want to show the background as well, along with the highlighted differences.
Is there any way I can keep the background visible, just lighter, and only highlight the differences?
from PIL import Image, ImageChops

point_table = ([0] + ([255] * 255))

def black_or_b(a, b):
    diff = ImageChops.difference(a, b)
    diff = diff.convert('L')
    # diff = diff.point(point_table)
    h, w = diff.size
    new = diff.convert('RGB')
    new.paste(b, mask=diff)
    return new

a = Image.open('i1.png')
b = Image.open('i2.png')
c = black_or_b(a, b)
c.save('diff.png')
Example images: https://drive.google.com/file/d/0BylgVQ7RN4ZhTUtUU1hmc1FUVlE/view?usp=sharing
PIL does have some handy image manipulation methods, but also a lot of shortcomings when you want to start doing serious image processing. Most Python literature will recommend that you switch to NumPy for your pixel data, which gives you full control. Other imaging libraries such as leptonica, gegl and vips all have Python bindings and a range of nice functions for image composition/segmentation.
In this case, the thing to do is to imagine how you would get the desired output in an image manipulation program: you'd place a black (or other color) shade over the original image, and over this paste the second image, using a threshold of the differences as a mask for the second image (i.e. a pixel is either equal or different; all intermediate values should be rounded up to "different").
I modified your function to create such a composition:
from PIL import Image, ImageChops, ImageDraw

point_table = ([0] + ([255] * 255))

def new_gray(size, color):
    img = Image.new('L', size)
    dr = ImageDraw.Draw(img)
    dr.rectangle((0, 0) + size, color)
    return img

def black_or_b(a, b, opacity=0.85):
    diff = ImageChops.difference(a, b)
    diff = diff.convert('L')
    # Hack: there is no threshold in PIL,
    # so we add the difference with itself to do
    # a poor man's thresholding of the mask
    # (the values for equal pixels - 0 - don't add up):
    thresholded_diff = diff
    for repeat in range(3):
        thresholded_diff = ImageChops.add(thresholded_diff, thresholded_diff)
    h, w = size = diff.size
    mask = new_gray(size, int(255 * opacity))
    shade = new_gray(size, 0)
    new = a.copy()
    new.paste(shade, mask=mask)
    # To have the original image show through partially,
    # simply use "diff" instead of thresholded_diff below.
    new.paste(b, mask=thresholded_diff)
    return new

a = Image.open('a.png')
b = Image.open('b.png')
c = black_or_b(a, b)
c.save('c.png')
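As a small aside, the repeated ImageChops.add trick is not the only way to threshold here: the point_table defined at the top (and otherwise left unused) can map every non-zero difference value straight to 255 in one pass. A short sketch of that variant, using the same hypothetical a.png/b.png files:

from PIL import Image, ImageChops

# Map 0 -> 0 and every non-zero difference value -> 255 in a single pass.
point_table = [0] + [255] * 255
diff = ImageChops.difference(Image.open('a.png'), Image.open('b.png')).convert('L')
thresholded_diff = diff.point(point_table)   # usable as the paste mask above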
Here's a solution using libvips:
import sys
from gi.repository import Vips
a = Vips.Image.new_from_file(sys.argv[1], access = Vips.Access.SEQUENTIAL)
b = Vips.Image.new_from_file(sys.argv[2], access = Vips.Access.SEQUENTIAL)
# a != b makes an N-band image with 0/255 for false/true ... we have to OR the
# bands together to get a 1-band mask image which is true for pixels which
# differ in any band
mask = (a != b).bandbool("or")
# now pick pixels from a or b with the mask ... dim false pixels down
diff = mask.ifthenelse(a, b * 0.2)
diff.write_to_file(sys.argv[3])
With PNG images, most CPU time is spent in PNG read and write, so vips is only a bit faster than the PIL solution.
libvips does use a lot less memory, especially for large images. libvips is a streaming library: it can load, process and save the result all at the same time, so it does not need to have the whole image loaded into memory before it can start work.
For a 10,000 x 10,000 RGB tif, libvips is about twice as fast and needs about 1/10th the memory.
If you're not wedded to the idea of using Python, there are a few really simple solutions using ImageMagick:
“Diff” an image using ImageMagick

Reassembling a fragmented image

I have an image that has been broken into parts, 64 rows by 64 columns. Each part is 256x256 px and all of them are PNGs. They are named "Image-<x>-<y>.png", for example "Image-3-57". The row and column numbering starts from 0 rather than 1.
How can I assemble this back into one image? Ideally using Bash and standard tools (I'm a sysadmin), though PHP would be acceptable as well.
Well, it is not very complicated if you want to use PHP. All you need is a few image functions - imagecreate and imagecopy. If your PNGs are semi-transparent, you will also need imagefilledrectangle to create a transparent background.
In the code below I rely on the fact that all chunks are the same size, so the overall pixel dimensions must be evenly divisible by the number of chunks.
<?php
$width  = 256 * 64;  // width of the big image, in pixels
$height = 256 * 64;  // height of the big image, in pixels
$chunks_X = 64;      // number of chunks along X
$chunks_Y = 64;      // same for Y
$chunk_size_X = $width / $chunks_X;   // size of one chunk, needed for copying
$chunk_size_Y = $height / $chunks_Y;

$big = imagecreate($width, $height);  // create the big one

for ($y = 0; $y < $chunks_Y; $y++) {
    for ($x = 0; $x < $chunks_X; $x++) {
        $chunk = imagecreatefrompng("Image-$x-$y.png");
        imagecopy($big, $chunk,
            $x * $chunk_size_X,   // position where to place the little image
            $y * $chunk_size_Y,
            0,                    // where to copy from on the little image
            0,
            $chunk_size_X,        // size of the copied area - the whole little image here
            $chunk_size_Y
        );
        imagedestroy($chunk);     // don't forget to free the memory
    }
}

imagepng($big, "Image-full.png"); // save the assembled image
imagedestroy($big);
?>
This is just a draft. I'm not sure about all these xs and ys, as well as other details. It is late and I'm tired.
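If stepping outside PHP is acceptable, the same approach is only a few lines with Python/Pillow. This is a sketch under the assumptions from the question (64x64 tiles of 256x256 pixels, named Image-<x>-<y>.png with x as the column and y as the row; note that the assembled RGBA canvas needs roughly 1 GB of memory):

from PIL import Image

TILE, COLS, ROWS = 256, 64, 64

# Paste every 256x256 tile at its grid position on one big canvas.
big = Image.new('RGBA', (COLS * TILE, ROWS * TILE))
for y in range(ROWS):
    for x in range(COLS):
        with Image.open(f'Image-{x}-{y}.png') as tile:
            big.paste(tile, (x * TILE, y * TILE))
big.save('Image-full.png')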

How to get Intensity Pointer of the gray Image in opencv

I have a binary file containing the 16-bit intensity values of an image. I have read this data into a short array and created a 16-bit gray image using the following code:
IplImage *img=cvCreateImage( cvSize( Image_width, Image_height ), IPL_DEPTH_16S, 1 );
cvSetData(img,Data, sizeof(short )*Image_width);
where Data is the short array.
Then I set a ROI on this image using this function:
cvSetImageROI(img, cvRect(crop_start.x, crop_start.y, crop_width, Image_height));
and the ROI is set successfully.
Now, after setting the ROI, I want to access the intensity of the image, i.e. I want a pointer to the intensity data of the cropped image. I have tried this code to access it:
short *crop_Imagedata=(short *)img->imageData;
But this pointer does not give the right intensity values, as I have verified by debugging the code.
Can anybody please tell me how I can get a pointer to the image intensity data?
Thanks in advance.
Hello, I have tried the following to figure out what you may want to do:
IplImage *img=cvCreateImage( cvSize( 15, 15 ), IPL_DEPTH_16S, 1 );
cvSet(img,cvScalar(15));//sets all values to 15
cvSetImageROI(img, cvRect(4, 0, 10, 15));
short *crop_Imagedata=(short *)img->imageData;
((*crop_Imagedata) == 15) // true
The value that you get is not inside the ROI! imageData in the IplImage structure is just a plain pointer, not a function! The ROIs in OpenCV are, in my opinion, not that well documented and not easy to use. Presumably most OpenCV algorithms take the ROI into account somehow. I use ROIs too, but with the plain IplImage structure there is no automatic way to simply access them.
If you want more magic, try the newer cv::Mat object...
But if you still want to use ROIs with IplImage, then you will always have to use
CvRect roi = cvGetImageROI(img);
to check the current ROI position each time. After that you have to add the ROI offset:
((short*)(img->imageData + (roi.y+ypos)*img->widthStep))[roi.x+xpos]
Remember that standard OpenCV is a C library, not C++! By the way, when mixing in cv::Mat, ROIs can be a bit annoying too. To copy an IplImage to a cv::Mat without the ROI applied, I have to do the following:
CvRect roitmp = cvGetImageROI(ilimage);
cvResetImageROI(ilimage);
cv::Mat tmp = cv::Mat(ilimage).clone();
cvSetImageROI(ilimage,roitmp);
Maybe someone here knows the right way of working with ROIs...
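For comparison, the Python bindings sidestep all of this: a cv::Mat is exposed as a NumPy array, so a ROI is just a slice that views the same data. A small sketch (my own illustration, assuming a 16-bit single-channel image like the one in the question, using NumPy only):

import numpy as np

# A 16-bit single-channel image, analogous to IPL_DEPTH_16S with one channel.
img = np.full((15, 15), 15, dtype=np.int16)

# The equivalent of cvSetImageROI(img, cvRect(4, 0, 10, 15)): rows 0..14, columns 4..13.
roi = img[0:15, 4:14]

print(roi[0, 0])   # 15 -- indexing is already relative to the ROI
roi[0, 0] = 42     # a slice is a view, so this writes through to the original
print(img[0, 4])   # 42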

Prevent GDI+ PNG Encoder from adding Gamma information to a 1-bit PNG

I wonder if it's possible to instruct the Imaging PNG encoder not to add any gamma and chroma information to a 1-bit PNG.
I am creating a 2-color palette for the image:
ColorPalette* pal = (ColorPalette*)CoTaskMemAlloc(sizeof(ColorPalette) + 2 * sizeof(ARGB));
pal->Count = 2;
pal->Flags = 0;
pal->Entries[0] = MAKEARGB(0,0,0,0);
pal->Entries[1] = MAKEARGB(0,255,255,255);
if (FAILED(res = sink->SetPalette(pal))) {
    return res;
}
CoTaskMemFree(pal);
and then just
BitmapData bmData;
bmData.Height = bm.bmHeight;
bmData.Width = bm.bmWidth;
bmData.Scan0 = bm.bmBits;
bmData.PixelFormat = PixelFormat1bppIndexed;
UINT bitsPerLine = imageInfo.Width * bm.bmBitsPixel;
UINT bitAlignment = sizeof(LONG) * 8;
UINT bitStride = bitAlignment * (bitsPerLine / bitAlignment); // The image buffer is always padded to LONG boundaries
if ((bitsPerLine % bitAlignment) != 0) bitStride += bitAlignment; // Add a bit more for the leftover values
bmData.Stride = bitStride / 8;
if (FAILED(res = sink->PushPixelData(&rect, &bmData, TRUE))) {
    return res;
}
The resulting PNG image is way too large and contains the following useless chunks:
sRGB, gAMA, cHRM
I was actually only expecting PLTE, not sRGB. How do I have to set up the encoder to skip the gamma and chroma information?
I'm also interested to know whether this is possible. I use GDI+ in a C++ program to generate PNGs for a website, and the PNGs have different colors than the CSS although I put in exactly the same values. Removing the sRGB chunk could solve the gamma problem in most browsers.
I hope there is a solution for this!
I resolved this by using the FreeImage library (http://freeimage.sourceforge.net/).
I create the bitmap with GDI+, lock its pixel data, create a FreeImage bitmap, lock it too, and copy the pixels across.
Then I have FreeImage save it to a PNG and voila... correct gamma handling, and it looks right in every browser.
It's a little more overhead (although I have a feeling that FreeImage saves the images much faster than GDI+, making the overall process even faster), but of course you will need to ship an extra library and DLL with your project.
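Another route, not mentioned in either answer above and offered only as a sketch: since sRGB, gAMA and cHRM are ancillary PNG chunks, you can post-process the finished file and simply drop them. The chunk layout (an 8-byte signature, then chunks of 4-byte length, 4-byte type, data, and a 4-byte CRC) makes that a few lines in any language; here it is in Python, with hypothetical file names:

import struct

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'
DROP = {b'gAMA', b'cHRM', b'sRGB'}

def strip_ancillary_chunks(src_path, dst_path):
    # Copy a PNG chunk by chunk, skipping the gamma/chroma chunks the encoder added.
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        assert src.read(8) == PNG_SIGNATURE
        dst.write(PNG_SIGNATURE)
        while True:
            header = src.read(8)                   # 4-byte length + 4-byte type
            if len(header) < 8:
                break
            length, ctype = struct.unpack('>I4s', header)
            body = src.read(length + 4)            # chunk data plus CRC
            if ctype not in DROP:
                dst.write(header + body)

strip_ancillary_chunks('in.png', 'out.png')        # hypothetical file names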
