I recently asked how to convert Float32 or Uint8 arrays into images in the Images package. I got an answer for the Float32 case, but am still having trouble figuring out how to save a Uint8 array.
As an example, let's create a random Uint8 array using the traditional Matlab scheme where the dimensions are (m,n,3):
array = rand(Uint8, 50, 50, 3);
img = convert(Image, array);
Using the same approach that works for the Float32 case,
imwrite(img, "out.png")
fails with message
ERROR: method 'mapinfo' has no method matching mapinfo(::Type{ImageMagick}, ::Image{Uint8, 3, Image{Uint8, 3, Array{Uint8, 3}}}).
I checked the documentation, and it says
If data encodes color information along one of the dimensions of the array (as opposed to using a ColorValue array, from the Color.jl package), be sure to specify the "colordim" and "colorspace" in properties.
However, inspecting the img object previously created shows that it has colordim = 3 and colorspace = RGB already set up, so this can't be the problem.
I then searched the documentation for all instances of MapInfo. In core.md there is one occurrence:
scalei: a property that controls default contrast scaling upon display. This should be a MapInfo value, to be used for setting the contrast upon display. In the absence of this property, the range 0 to 1 will be used.
But there was no information on what exactly a MapInfo object is, so I looked further, and in function_reference.md it says:
Here is how to directly construct the major concrete MapInfo types:
MapNone(T), indicating that the only form of scaling is conversion to type T. This is not very safe, as values "wrap around": for example, converting 258 to a Uint8 results in 0x02, which would look dimmer than 255 = 0xff.
...
and some other examples. So I tried to specify scalei = MapNone(Uint8) as follows:
img2 = Image(img, colordim = 3, colorspace = "RGB", scalei = MapNone(Uint8));
imwrite(img, "out.png")
but got the same error again.
How do you encode Uint8 image data using Images in Julia?
You can convert back and forth between arrays of primitive types such as UInt8 and arrays of color types. These conversions are achieved in a unified way via two functions: colorview and channelview.
Example
Convert array of UInt8 to array of RGB:
arr = rand(UInt8, 3, 50, 50)
img = colorview(RGB, arr / 255)
Convert back to channel view:
channelview(img)
Notes
In this example the RGB color type requires that the entries of the array live in [0,1] as floating point. I manually converted UInt8 to Float64 with an explicit division by 255. There is probably a more generic way of achieving this with reinterpret or some other function in Images.jl.
The colorview and channelview functions assume that the channel dimension is the first dimension of the array. You can use permutedims in case your channels live in a different dimension, or use some function in Images.jl (maybe reinterpretc?) to do it efficiently without memory copies.
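For what it's worth, here is a sketch of that more generic route, assuming a reasonably recent Images.jl where normedview and rawview (from ImageCore) are available; it reinterprets the UInt8 values as N0f8 fixed-point numbers instead of dividing by 255:
using Images  # re-exports colorview, channelview, normedview, rawview and FileIO's save

arr  = rand(UInt8, 50, 50, 3)           # m×n×3 layout as in the question
arrc = permutedims(arr, (3, 1, 2))      # move the channel dimension to the front (this copies)
img  = colorview(RGB, normedview(arrc)) # reinterpret UInt8 as N0f8 in [0,1] -- no scaling, no copy

raw = rawview(channelview(img))         # back to a 3×50×50 array of UInt8

save("out.png", img)                    # assumes an ImageMagick/ImageIO backend is installed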
Related
I am using doxygen + Sphinx to generate documentation for some Python bindings I have written.
The Python bindings are written using pybind11.
When I write my documentation string for a non-overloaded function, it formats properly.
Here is an example:
// Pybind11 python bindings.
// Module and class defined above...
.def("get_similarity", [](SDK &sdk, const Faceprint& faceprint1, const Faceprint& faceprint2) {
float similarity;
float probability;
ErrorCode status = sdk.getSimilarity(faceprint1, faceprint2, probability, similarity);
return std::make_tuple(status, probability, similarity);
},
R"mydelimiter(
Compute the similarity of the given feature vectors.
:param feature_vector_1: the first Faceprint to be compared.
:param feature_vector_2: the second Faceprint to be compared.
:return: The error code (see :class:`ERRORCODE`), match probability, and similarity score, in that order. The match probability is the probability that the two face feature vectors are a match, while the similarity is the computed similarity score.
)mydelimiter",
py::arg("feature_vector_1"), py::arg("feature_vector_2"))
This is what it looks like:
When I write documentation for an overloaded function, the formatting is off. Here is an example:
.def("set_image", [](SDK &sdk, py::array_t<uint8_t> buffer, uint16_t width, uint16_t height, ColorCode code) {
py::buffer_info info = buffer.request();
ErrorCode status = sdk.setImage(static_cast<uint8_t*>(info.ptr), width, height, code);
return status;
},
R"mydelimiter(
Load an image from the given pixel array in memory.
Note, it is highly encouraged to check the return value from setImage before proceeding.
If the license is invalid, the ``INVALID_LICENSE`` error will be returned.
:param pixel_array: decoded pixel array.
:param width: the image width in pixels.
:param height: the image height in pixels.
:param color_code: pixel array color code, see :class:`COLORCODE`
:return: Error code, see :class:`ERRORCODE`
)mydelimiter",
py::arg("pixel_array"), py::arg("width"), py::arg("height"), py::arg("color_code"))
// Other overrides of set_image below...
The formatting is all off for this, in particular the way the Parameters and Returns are displayed. This is what it looks like.
How can I get the set_image docs to look like the get_similarity docs?
I'm not sure how to properly solve the problem, but here is a hack I used to make them appear the same. Basically, I hard-coded the formatting:
R"mydelimiter(
Load an image from the given pixel array in memory.
Note, it is highly encouraged to check the return value from setImage before proceeding.
If the license is invalid, the ``INVALID_LICENSE`` error will be returned.
:Parameters:
- **pixel_array** - decoded pixel array.
- **width** - the image width in pixels.
- **height** - the image height in pixels.
- **color_code** - pixel array color code, see :class:`COLORCODE`
:Returns:
Error code, see :class:`ERRORCODE`
)mydelimiter"
I have the following code to import multiple images from one directory into a struct in MATLAB; here is an example of the images.
myPath= 'E:\conduit_stl(smooth contour)\Collagen Contour Slices\'; %'
fileNames = dir(fullfile(myPath, '*.tif'));
C = cell(length(fileNames), 1);
for k = 1:length(fileNames)
filename = fileNames(k).name;
C{k} = imread(filename);
se = strel('disk', 2, 0);
C = imclose(C, se);
filled = imfill(C,'holes');
end
Now I would like to perform a fill on all the images and later find the centroids. However, when attempting this, I get an error stating: "Expected input number 1, I1 or BW1, to be one of these types: double, ... etc." I tried converting the images to double precision, but that just resulted in: "Conversion to double from cell is not possible."
This is most likely due to the type of structure the images are 'housed' in, but I have no idea how to address that.
Help on this would be greatly appreciated.
So to elaborate on my previous comments, here are a few things to change with your code:
C is not a structure but a cell array. The content of a cell array is accessed with {curly brackets}. If all your images are the same size, then it is more efficient to store them in a numeric array instead of a cell array. Since they seem to be logical images, your array would have 3 dimensions:
[height, width, numberofimages]
You could therefore start your code with:
myPath = 'E:\conduit_stl(smooth contour)\Collagen Contour Slices\'; %'
fileNames = dir(fullfile(myPath, '*.tif'));
%// read the first image to get the common height and width
[height, width] = size(imread(fullfile(myPath, fileNames(1).name)));
%// if your images are of type uint8
C(height, width, length(fileNames)) = uint8(0);
C_filled = C; %// initialize a new array to store the filled images
Also, since you are using the same structuring element for your morphological operation on all the images, you can define it once outside the loop.
So your code could look like this:
se = strel('disk', 2, 0);
for k = 1:length(fileNames)
    C(:,:,k) = imread(fullfile(myPath, fileNames(k).name));
    C_filled(:,:,k) = imfill(imclose(C(:,:,k), se), 'holes');
end
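For the centroid step mentioned in the question, a possible follow-up could look like this (a sketch; it assumes the filled slices are, or can be converted to, binary images):
centroids = cell(length(fileNames), 1);
for k = 1:length(fileNames)
    stats = regionprops(logical(C_filled(:,:,k)), 'Centroid');
    centroids{k} = vertcat(stats.Centroid);   % one [x y] row per connected region
end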
I have an image.mat of about 4 MB.
An image file can also be about 4 MB in size.
Can the image.mat be converted to an image file?
I tried this, but that doesn't do the trick:
load image.mat %load Iw
imshow(mat2gray(Iw))
imwrite(Iw,'image.png');
IwNew = imread('image.png');
isequal(Iw,IwNew)
The result is 0; am I misunderstanding something?
The numbers in Iw are very important, so Iw cannot be changed.
Actually, my real problem is how to store floating-point numbers in an image.
But MATLAB does not support Tiff 6.0, so I'll have to find some workaround.
I am doing blind watermarking, and the decimal fraction of the numbers in Iw is important because it involves information about another image. So Iw cannot be changed.
Actually, Mathematica can store floating-point data.
But my programs are all in MATLAB.
According to the MATLAB documentation:
"If A is a grayscale or RGB color image of data type double or single, then imwrite assumes that the dynamic range is [0,1] and automatically scales the data by 255 before writing it to the file as 8-bit values."
In other words: imwrite performs automatic conversion from double to uint8.
If you wish to keep the values of Iw unchanged, save it as a mat file and not as an image.
If you do want to save it as an image - there is going to be some loss of information. In this case, there are two things which need to be done:
Change the dynamic range of the matrix to [0,1] (in your case, the range is from -0.0035 to 255.0035; also, the matrix contains inf values).
If you want to check for equality, scale Iw by 255 and convert it to uint8.
Code:
load image.mat %load Iw
%step 1, change the dynamic range of the image to [0,1].
%One way to do it is by using mat2gray on each channel separately.
Iw(:,:,1) = mat2gray(Iw(:,:,1));
Iw(:,:,2) = mat2gray(Iw(:,:,2));
Iw(:,:,3) = mat2gray(Iw(:,:,3));
%write the image to file
imwrite(Iw,'image.png');
%read the image
IwNew=imread('image.png');
%scale it, and convert to uint8
Iw2 = uint8(Iw*255);
%check equality
isequal(Iw2,IwNew)
Result:
ans =
1
Alternatively, if you want to convert IwNew to double, perform the following:
%conversion to double
Iw2 = double(IwNew)/255;
Notice that in this case the matrices won't be exactly equal to one another, due to the loss of information during the imwrite process (conversion from double to uint8). Instead, they will be epsilon-close to one another, where epsilon = 0.0001.
In order to test this, write the following:
%equality check
sum(abs(Iw2(:)-Iw(:))>0.0001)
Result:
ans =
0
My MATLAB (R2010a) with the Image Processing Toolbox is perfectly capable of storing double-valued pixel values, and retrieving them without loss of data.
Here's a shameless copy of this answer:
% Some random, data of type double
A = 7.6*rand(10);
% Construct TIFF image...
t = Tiff('test.tif', 'w');
% ...with these custom parameters...
tagstruct = struct(...
'ImageLength' , size(A,1),...
'ImageWidth' , size(A,2),...
'Compression' , Tiff.Compression.None,...
'SampleFormat' , Tiff.SampleFormat.IEEEFP,... % floating point
'Photometric' , Tiff.Photometric.MinIsBlack,...
'BitsPerSample' , 64,... % 8 bytes / double
'SamplesPerPixel' , 1,...
'PlanarConfiguration', Tiff.PlanarConfiguration.Chunky);
t.setTag(tagstruct);
% ...and write it to disk.
t.write(A);
t.close();
% Read the data actually written, and check if all
% information was indeed preserved:
B = imread('test.tif');
isequal(A,B)
Result:
ans =
1
Adjust in obvious ways if you have more than 1 channel (RGB).
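For instance, an untested sketch of the 3-channel case (it assumes A is an m-by-n-by-3 double array and keeps the same IEEEFP/64-bit settings):
A = rand(10, 10, 3);                     % double-valued RGB data
t = Tiff('test_rgb.tif', 'w');
tagstruct = struct(...
    'ImageLength'        , size(A,1),...
    'ImageWidth'         , size(A,2),...
    'Compression'        , Tiff.Compression.None,...
    'SampleFormat'       , Tiff.SampleFormat.IEEEFP,... % floating point
    'Photometric'        , Tiff.Photometric.RGB,...     % RGB instead of MinIsBlack
    'BitsPerSample'      , 64,...                        % 8 bytes / double
    'SamplesPerPixel'    , 3,...                         % one sample per channel
    'PlanarConfiguration', Tiff.PlanarConfiguration.Chunky);
t.setTag(tagstruct);
t.write(A);
t.close();
B = imread('test_rgb.tif');
isequal(A, B)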
I read the image with:
W=double(imread('rose32.bmp'));
Then:
imshow(W,[]);
or
imshow(W);
But the shown image seems to be inverted with respect to the original image. How can I solve this problem? Is it a MATLAB problem?
The problem is probably caused by the formatting of the image file!
When you use imread, what it returns depends on the formatting of the image in the image file. imread returns three values, [A,map,transparency] = imread(___), where A might be an h×w matrix or an h×w×3 matrix (h and w are short for height and width) of several different possible classes (e.g. double or uint8).
In the case of the h×w×3 matrix, the output variable map will be empty, and you can show the image directly using imshow(A). This is called an RGB image.
The other possibility (called an indexed image) is the h×w matrix. In this case map is a colormap, and you can show the image with imshow(A,map).
You can easily convert between these two types of images with ind2rgb(A,map) and rgb2ind(A,n) (where n is the number of colors).
The other thing you need to be careful with is the class of the image.
If you have an RGB image of class uint8, then the values of the image will be integers between 0 and 255, whereas RGB images of type double have values between 0 and 1. You should never convert an image to double class with the double function as you do; instead, use im2double.
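To see the difference concretely, here is a small sketch (hypothetical file name; it assumes a plain uint8 image rather than an indexed one):
img8 = imread('some_uint8_image.png');   % hypothetical uint8 image, values 0..255
d1 = double(img8);                       % still 0..255 -> imshow(d1) saturates to white
d2 = im2double(img8);                    % rescaled to 0..1 -> imshow(d2) displays correctly
[max(d1(:)), max(d2(:))]                 % e.g. 255 vs 1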
So to solve your problem try the following code:
[img,map] = imread('rose32.bmp');
if ~isempty(map)
img = ind2rgb(img,map);
end
img = im2double(img);
Now imshow(img) should show the image correctly. Or you can simply use the following code:
[W,map] = imread('rose32.bmp');
imshow(W,map);
I am trying to overlay an activation map over a baseline vasculature image but I keep getting the same error below:
X and Y must have the same size and class or Y must be a scalar double.
I resized each to 400x400 so I thought it would work, but no dice. Is there something I am missing? It is fairly straightforward for a GUI I am working on. Any help would be appreciated.
a = imread('Vasculature.tif');
b = imresize(a, [400,400]);
c = imread('activation.tif');
d = imresize(c, [400,400]);
e = imadd(b,d);
Could it be the bit depth or dpi?
I think one of your images is RGB (size(...,3)==3) and the other is grayscale (size(...,3)==1). Say the vasculature image a is grayscale and the activation image c is RGB. To convert a to RGB to match c, use ind2rgb, then add.
aRGB = ind2rgb(a,gray(256)); % assuming uint8
Alternatively, you could do aRGB = repmat(a,[1 1 3]);.
Or to put the activation image into grayscale:
cGray = rgb2gray(c);
Also, according to the documentation for imadd the two images must be:
nonsparse numeric arrays with the same size and class
To get the uint8 and uint16 images to match use the im2uint8 or im2uint16 functions to convert. Alternatively, just rescale and cast (e.g. b_uint8 = uint8(double(b)*255/65535);).
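Putting both adjustments together, here is a minimal sketch based on the snippet in the question (assuming, for the sake of example, that the vasculature image is 16-bit grayscale and the activation map is 8-bit RGB):
a = imread('Vasculature.tif');     % e.g. uint16 grayscale
c = imread('activation.tif');      % e.g. uint8 RGB

b = imresize(a, [400 400]);
d = imresize(c, [400 400]);

b = im2uint8(b);                   % match the class (uint16 -> uint8)
if size(b,3) == 1 && size(d,3) == 3
    b = repmat(b, [1 1 3]);        % match the channel count (gray -> RGB)
end

e = imadd(b, d);
imshow(e);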
Note that in some versions of MATLAB there is a bug with displaying 16-bit images. The fix depends on whether the image is RGB or gray scale, and the platform (Windows vs. Linux). If you run into problems displaying 16-bit images, use imshow, which has the fix, or use the following code for integer data type images following image or imagesc:
function fixint16disp(img)
if any(strcmp(class(img), {'int16','uint16'}))
    if size(img,3) == 1
        colormap(gray(65535));
    end
    if ispc
        set(gcf, 'Renderer', 'zbuffer');
    end
end
chappjc's answer is just fine; I want to add a more general answer to the question of how to solve the error message
X and Y must have the same size and class or Y must be a scalar double
General solving strategy
1. Find the line at which the error occurs.
2. Try to understand the error message:
a. "... must have the same size ...":
Check the sizes of the input arguments.
Try to understand the meaning of your code for the given (type of) input parameters. Is the error message reasonable?
What do you want to achieve?
Useful command: size(A) returns the size of A.
b. "... must have the same class ...":
Check the data types of the input arguments.
Which common data type is reasonable?
Convert it to the chosen data type.
Useful command: whos A returns all the meta-information about A, i.e. size, data type, ... (see the short sketch after this list).
3. Implement the solution: your favorite search engine and the MATLAB documentation are your best friends.
4. Be happy: you solved your problem and learned something new.
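Applied to the imadd question above, steps 2a and 2b might look like this (a short sketch using the question's variables b and d):
size(b)     % e.g. 400 400      -> grayscale
size(d)     % e.g. 400 400 3    -> RGB: the sizes differ in the third dimension
whos b d    % compare the classes as well, e.g. uint16 vs uint8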
A simple code:
a = imread('image1.jpg');
b = imresize(a, [400,400]);
subplot(3,1,1), imshow(b), title('image 1');
c = imread('image2.jpg');
d = imresize(c, [400,400]);
subplot(3,1,2), imshow(d), title('image 2');
[x1, y1, z1] = size(b)   %height, width and number of channels of 1st image
[x2, y2, z2] = size(d)   %height, width and number of channels of 2nd image
im3 = zeros(size(b), 'like', b);   %preallocate the result with the same size and class as b
for i = 1:x1
    for j = 1:y1
        im3(i, j, :) = b(i, j, :) + d(i, j, :);
    end
end
subplot(3,1,3), imshow(im3), title('Resultant Image');