Fill background moves to the bottom of the plot after saving as jpg - matlab-figure

When I use the fill or viscircles functions to plot circles with a background over the plot, the background appears on top of the plot in the figure window, as intended, but after saving as jpg or png the background moves to the bottom (behind the plot) and is no longer visible.
How do I fix this?
Note: it's not because white is a transparent color. I tried gray and red, and both behave the same as white.

Please consider the following example (made on MATLAB 2015a):
figure(); h=cell(1);
%% viscircles method:
subplot(1,2,1);
plot([0 1],[0,1]); set(gca,'Color',0.8*[1 1 1]); axis square;
h{1} = viscircles([0.5,0.5], 0.1,'EdgeColor','k','DrawBackgroundCircle',false);
get(h{1},'Children')
%// Line with properties:
%//
%// Color: [0 0 0]
%// LineStyle: '-'
%// LineWidth: 2
%// Marker: 'none'
%// MarkerSize: 6
%// MarkerFaceColor: 'none'
%// XData: [1x182 double]
%// YData: [1x182 double]
%// ZData: [1x0 double]
%% Annotation method:
subplot(1,2,2);
plot([0 1],[0,1]); set(gca,'Color',0.8*[1 1 1]); axis square;
pos = get(gca,'position');
h{2} = annotation('ellipse',[pos(1)+pos(3)*0.5 pos(2)+pos(4)*0.5 0.1 0.1],...
'FaceColor','w')
%// Ellipse with properties:
%//
%// Color: [0 0 0]
%// FaceColor: [1 1 1]
%// LineStyle: '-'
%// LineWidth: 0.5000
%// Position: [0.7377 0.5175 0.1000 0.1000]
%// Units: 'normalized'
When saving the output to .jpg I get the following result (which is identical to the preview within the figure):
Notice the FaceColor property in the 2nd code cell, which is absent in the 1st object. The problem seems to be that the viscircles function you used is not supposed to draw a shape with a background, rather, it draws a line. It is unclear to me why you see the preview the way you do (i.e. with a background).
Sidenote: the 'DrawBackgroundCircle' option for viscircles merely draws some white background for the outline.
You should try some alternate ways to plot filled circles, such as:
Using annotation objects as in my example above. Note that these require you to provide coordinates in normalized figure units (and not axes data units!); see the conversion sketch at the end of this answer.
Using filled polygons:
N=20; c=[0.5,0.5]; r=0.1; color = [1 1 1];
t = 2*pi/N*(1:N);
hold all; fill(c(1)+r*cos(t), c(2)+r*sin(t), color);
[ snippet is based on a comment of this submission ]
Plotting with giant markers:
plot(0.5,0.5,'.r','MarkerSize',70);
or
scatter(0.5,0.5,70,'r','filled');
[ snippet is based on this answer ]. As mentioned in the link, the downside here is that marker size doesn't change with zoom.
As a very last resort, you could save the image in some vector format (such as .emf), edit it (with something like Inkscape), bring the background circles to a higher layer and export it.
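If you go the annotation route, here is a minimal sketch (my addition, not part of the original answer) of converting a circle given in axes data units into the normalized figure units that annotation('ellipse',...) expects. It assumes the axes limits are already set and that the axes box fills its Position rectangle:
cx = 0.5; cy = 0.5; r = 0.1;                        % circle centre and radius in data units
pos = get(gca,'Position');                          % axes position in normalized figure units
xl  = get(gca,'XLim'); yl = get(gca,'YLim');        % current data limits
fx  = pos(1) + pos(3)*((cx - r) - xl(1))/diff(xl);  % lower-left corner, x (figure units)
fy  = pos(2) + pos(4)*((cy - r) - yl(1))/diff(yl);  % lower-left corner, y (figure units)
fw  = pos(3)*2*r/diff(xl);                          % width in figure units
fh  = pos(4)*2*r/diff(yl);                          % height in figure units
annotation('ellipse',[fx fy fw fh],'FaceColor','w');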

Try:
print('-painters','-dpng','myFile')
or
print('-painters','-dsvg','myFile')
Logic: from the documentation, it appears that the rendering model changes from Painters (what you intended) to OpenGL when the figure is saved. I suspect that OpenGL does not respect your intentions, and forcing the renderer back to Painters should solve the problem. It is not clear, though, whether Painters rendering can be used for bitmap images and not only vector formats, so the first example may fail.
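If the flag form does not help, a related sketch (under my assumption that the renderer switch is indeed the culprit, as argued above) is to set the figure's Renderer property before printing:
set(gcf, 'Renderer', 'painters');        % force the painters renderer for this figure
print(gcf, '-dpng', '-r300', 'myFile');  % -r300 requests 300 dpi; adjust as needed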

Related

How do I save just the figure image without any graduations and numbers on the x and y axes?

I used the following MATLAB code:
img = imread('cmap3.png');
map = jet(256);
ind = rgb2ind(img, map);
colormap(map);
cm = colormap('gray');
image(ind)
Through the above code, I got the image below.
I want to save just the gray scale image without any graduations and numbers on the x and y axes.
How do I remove them and save the gray scale image?
If you use imwrite, you won't save the axes' labels.
For actual plots there exist different solutions, e.g. the one described here: set the axes to fill the whole figure so that there is no space left for labels, set(gca, 'Position', [0 0 1 1]). Then you can even use print to save the image/figure.
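For the code in the question, a minimal imwrite sketch could look like this (assuming a 256-level gray colormap is acceptable; the output file name is just an example):
img = imread('cmap3.png');
map = jet(256);
ind = rgb2ind(img, map);                    % indexed image, values 0..255
imwrite(ind, gray(256), 'cmap3_gray.png');  % writes the indexed image with a gray map, no axes at all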

Index exceeds matrix dimensions when trying to access position vectors

I am simply reading an image and wish to visualize the bounding boxes returned by blob analysis of matlab which returns position vectors.
Here is my code
img = imread(file_name);
img = im2bw(img);
gblob = vision.BlobAnalysis('AreaOutputPort', true, ... % Set blob analysis handling
'CentroidOutputPort', true, ...
'BoundingBoxOutputPort', true, ...
'MinimumBlobArea', 0, ...
'MaximumBlobArea', 600000, ...
'MaximumCount', 1000, ...
'MajorAxisLengthOutputPort',true, ...
'MinorAxisLengthOutputPort',true, ...
'OrientationOutputPort',true);
[Area,centroid, bbox, MajorAxis, MinorAxis,Orientation] = step(gblob, img);
% each bbox is position vector of the form [x y width height]
for i = 1:1:length(MajorAxis)
figure;imshow(img(bbox(i,2):bbox(i,2) + bbox(i,4),bbox(i,1):bbox(i,1)+bbox(i,3)));
end
On doing so I get the error Index exceeds matrix dimensions. I have also tried
figure;imshow(img(bbox(i,1):bbox(i,1) + bbox(i,3),bbox(i,2):bbox(i,2)+bbox(i,4)));
but I still end up getting the same error.
Here is a sample image where this code gives an error:
It's a simple case of mis-indexing. The blob detector returns the x and y coordinates of the top left corner of each blob. x here is the horizontal coordinate while y is the vertical. Therefore, you simply need to swap how you're accessing the image, as the vertical coordinate needs to come first and the horizontal second.
Also with regards to your image, I would invert the image because the object would be considered as a dark object on white background once you convert it to binary. The blob detector works by detecting white objects on black background. Therefore, invert the image and do some morphological closing to clean up the noise once that happens:
img = imread('http://www.aagga.com/wp-content/uploads/2016/02/Sample.jpg');
img = ~im2bw(img); %// Change - invert
img_clean = imclose(img, strel('square', 7)); %// Change, clean the image
I get this image now:
imshow(img_clean);
Not bad.... now run your actual blob detector. Take note that the image you put into the blob detector must use the right variable name. You'll need to call it img_clean now:
gblob = vision.BlobAnalysis('AreaOutputPort', true, ...
'CentroidOutputPort', true, ...
'BoundingBoxOutputPort', true, ...
'MinimumBlobArea', 0, ...
'MaximumBlobArea', 600000, ...
'MaximumCount', 1000, ...
'MajorAxisLengthOutputPort',true, ...
'MinorAxisLengthOutputPort',true, ...
'OrientationOutputPort',true);
[Area,centroid, bbox, MajorAxis, MinorAxis,Orientation] = step(gblob, img_clean);
Now finally extract out each blob:
% each bbox is position vector of the form [x y width height]
for i = 1:1:length(MajorAxis)
figure;
imshow(img_clean(bbox(i,2):bbox(i,2) + bbox(i,4),bbox(i,1):bbox(i,1)+bbox(i,3))); %// Change
end
I now get the following 9 figures:
Take note that the above isn't perfect because the perimeter of the sign is disconnected, so the blob detector will interpret this as different blobs. One way you could combat this is to vary the threshold of the image, or perhaps use graythresh to perform automatic (Otsu) thresholding so you can ensure that the border is connected properly. However, this is a good way to start tackling your problem.
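For example, a hedged sketch of the graythresh variant (Otsu's method picks the level automatically; whether it actually connects the sign's border depends on the image):
gray_img = rgb2gray(imread('http://www.aagga.com/wp-content/uploads/2016/02/Sample.jpg'));
level = graythresh(gray_img);                  % Otsu's threshold, a value in [0,1]
img = ~im2bw(gray_img, level);                 % invert: white objects on black background
img_clean = imclose(img, strel('square', 7));  % same cleanup as before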
Minor Note
A much easier way to do this is to do away with the Computer Vision Toolbox and use the Image Processing Toolbox. Specifically use regionprops and use the Image property to extract out the actual images that the blobs contain themselves:
%// Code from before
img = imread('http://www.aagga.com/wp-content/uploads/2016/02/Sample.jpg');
img = ~im2bw(img); %// Change - invert
img_clean = imclose(img, strel('square', 7)); %// Change, clean the image
%// Create regionprops structure with desired properties
out = regionprops(img_clean, 'BoundingBox', 'Image');
%// Cycle through each blob and show the image
for ii = 1 : numel(out)
figure;
imshow(out(ii).Image);
end
This should achieve the same thing as I showed you above. You can look at the documentation for more details on what kinds of properties regionprops returns for you per blob.
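Not part of the question, but if you also want to see where each blob sits in the image, a small sketch that overlays the BoundingBox of every blob on the cleaned image (rectangle takes the same [x y width height] convention mentioned above):
imshow(img_clean); hold on;
for ii = 1 : numel(out)
    rectangle('Position', out(ii).BoundingBox, 'EdgeColor', 'r', 'LineWidth', 1);
end
hold off;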

To convert only black color to white in Matlab

I know this thread about converting black color to white and white to black simultaneously.
I would like to convert only black to white.
I know this thread about doing what I am asking, but I do not understand what goes wrong.
Picture
Code
rgbImage = imread('ecg.png');
grayImage = rgb2gray(rgbImage); % for non-indexed images
level = graythresh(grayImage); % threshold for converting image to binary,
binaryImage = im2bw(grayImage, level);
% Extract the individual red, green, and blue color channels.
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
% Make the black parts pure red.
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 0;
blueChannel(~binaryImage) = 0;
% Now recombine to form the output image.
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
Which gives
where something seems to be wrong in the red color channel.
Black is just (0,0,0) in RGB, so removing it should mean turning every (0,0,0) pixel into white (255,255,255).
Implementing this idea with
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
Gives
So I must have misunderstood something in MATLAB. The blue color should not contain any black, so this last image is strange.
How can you turn only black color to white?
I want to keep the blue color of the ECG.
If I understand you properly, you want to extract out the blue ECG plot while removing the text and axes. The best way to do that would be to examine the HSV colour space of the image. The HSV colour space is great for discerning colours just like the way humans do. We can clearly see that there are two distinct colours in the image.
We can convert the image to HSV using rgb2hsv and we can examine the components separately. The hue component represents the dominant colour of the pixel, the saturation denotes the purity or how much white light there is in the pixel and the value represents the intensity or strength of the pixel.
Try visualizing each channel doing:
im = imread('http://i.stack.imgur.com/cFOSp.png'); %// Read in your image
hsv = rgb2hsv(im);
figure;
subplot(1,3,1); imshow(hsv(:,:,1)); title('Hue');
subplot(1,3,2); imshow(hsv(:,:,2)); title('Saturation');
subplot(1,3,3); imshow(hsv(:,:,3)); title('Value');
Hmm... well the hue and saturation don't help us at all. They tell us the dominant colour and saturation are the same... but what sets them apart is the value. If you take a look at the image on the right, we can tell them apart by the strength of the colour itself. So what it's telling us is that the "black" pixels are actually blue, but with almost no strength associated with them.
We can actually use this to our advantage. Any pixels whose values are above a certain value are the values we want to keep.
Try setting a threshold... something like 0.75. MATLAB's dynamic range for HSV values is [0,1], so:
mask = hsv(:,:,3) > 0.75;
When we threshold the value component, this is what we get:
There's obviously a bit of quantization noise... especially around the axes and font. What I'm going to do next is perform a morphological erosion so that I can eliminate the quantization noise that's around each of the numbers and the axes. I'm going to make the structuring element a bit large to ensure that I remove this noise. Using the Image Processing Toolbox:
se = strel('square', 5);
mask_erode = imerode(mask, se);
We get this:
Great, so what I'm going to do now is make a copy of your original image, then set any pixel that is zero in the mask I derived above to white in the final image. All of the other pixels remain intact. This way we remove the text and the axes seen in your image:
im_final = im;
mask_final = repmat(mask_erode, [1 1 3]);
im_final(~mask_final) = 255;
I need to replicate the mask in the third dimension because this is a colour image and I need to set each channel to 255 simultaneously in the same spatial locations.
When I do that, this is what I get:
Now you'll notice that there are gaps in the graph.... which is to be expected due to quantization noise. We can do something further by converting this image to grayscale and thresholding it, then joining the edges together with a morphological dilation. This is safe because we have already eliminated the axes and text. We can then use this as a mask to index into the original image to obtain our final graph.
Something like this:
im2 = rgb2gray(im_final);
thresh = im2 < 200;
se = strel('line', 10, 90);
im_dilate = imdilate(thresh, se);
mask2 = repmat(im_dilate, [1 1 3]);
im_final_final = 255*ones(size(im), class(im));
im_final_final(mask2) = im(mask2);
I threshold the previous image that we got without the text and axes after I convert it to grayscale, and then I perform dilation with a line structuring element that is 90 degrees in order to connect those lines that were originally disconnected. This thresholded image will contain the pixels that we ultimately need to sample from the original image so that we can get the graph data we need.
I then take this mask, replicate it, make a completely white image and then sample from the original image and place the locations we want from the original image in the white image.
This is our final image:
Very nice! I had to do all of that image processing because your image basically has quantization noise to begin with, so it's going to be a bit harder to get the graph entirely. Ander Biguri in his answer explained in more detail about colour quantization noise so certainly check out his post for more details.
However, as a qualitative measure, we can subtract this image from the original image and see what is remaining:
imshow(rgb2gray(abs(double(im) - double(im_final_final))));
We get:
So it looks like the axes and text are removed fine, but there are some traces in the graph that we didn't capture from the original image and that makes sense. It all has to do with the proper thresholds you want to select in order to get the graph data. There are some trouble spots near the beginning of the graph, and that's probably due to the morphological processing that I did. This image you provided is quite tricky with the quantization noise, so it's going to be very difficult to get a perfect result. Also, these thresholds unfortunately are all heuristic, so play around with the thresholds until you get something that agrees with you.
Good luck!
What's the problem?
You want to detect all black parts of the image, but they are not really black
Example:
Your idea (or your code):
You first binarize the image, selecting the pixels that ARE something against the pixels that are not. In short, you do: if pixel>level; pixel is something
There is a small misconception here! When you write
% Make the black parts pure red.
it should read
% Make every pixel that is something (not background) pure red.
Therefore, when you do
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
You are doing
% Make every pixel that is something (not background) white
% (or what it is the same in this case, delete them).
Therefore what you should get is a completely white image. The image is not completely white because some pixels were labelled as "not something, part of the background" by the value of level, which in the case of your image is around 0.6.
A solution one could think of is manually setting the level to 0.05 or similar, so that only black pixels are selected in the gray-to-binary thresholding. But this will not work 100%: as you can see, the numbers have some clearly non-black values.
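For reference, that manual-level variant would be a one-line change to the question's code (a sketch; 0.05 is just the ad-hoc value mentioned above):
grayImage   = rgb2gray(imread('ecg.png'));
binaryImage = im2bw(grayImage, 0.05);   % only pixels darker than ~5% gray come out as 0
% ~binaryImage now selects the (nearly) black pixels, as in the question's code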
How would I try to solve the problem:
I would try to find the colour you want, extract just that colour from the image, and then delete outliers.
Extract blue using HSV (I believe I answered you somewhere else how to use HSV).
rgbImage = imread('ecg.png');
hsvImage=rgb2hsv(rgbImage);
I=rgbImage;
R=I(:,:,1);
G=I(:,:,2);
B=I(:,:,3);
th=0.1;
R((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
G((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
B((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
I2= cat(3, R, G, B);
imshow(I2)
Once here, we would like to get the biggest blue part, and that would be our signal. Therefore the best approach seems to be to first binarize the image, taking all the blue pixels.
% Binarize image, getting all the pixels that are "blue"
bw=im2bw(rgb2gray(I2),0.9999);
And then using bwlabel, label all the independent pixel "islands".
% Label each "blob"
lbl=bwlabel(~bw);
The label most repeated will be the signal. So we find it and separate the background from the signal using that label.
% Find the blob with the highest amount of data. That will be your signal.
r=histc(lbl(:),1:max(lbl(:)));
[~,idxmax]=max(r);
% Profit!
signal=rgbImage;
signal(repmat((lbl~=idxmax),[1 1 3]))=255;
background=rgbImage;
background(repmat((lbl==idxmax),[1 1 3]))=255;
Here is a plot with the signal, background and difference (using the same equation as @rayryeng used).
Here is a variation on @rayryeng's solution to extract the blue signal:
%// retrieve picture
imgRGB = imread('http://i.stack.imgur.com/cFOSp.png');
%// detect axis lines and labels
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
To get a better view, here is the detected mask overlayed on top of the original image (I'm using imoverlay function from the File Exchange):
figure
imshow(imoverlay(imgRGB, BW, uint8([255,0,0])))
Here is code for this:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = ~redChannel&~greenChannel&~blueChannel;
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
black is the mask containing the black pixels. These pixels are set to white in each color channel.
In your code you use a threshold on a grayscale image, so of course a much bigger area of pixels is set to white (or red, respectively). In this code, only pixels that contain absolutely no red, green, and blue are set to white.
The following code does the same with a threshold for each color channel:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = (redChannel<150)&(greenChannel<150)&(blueChannel<150);
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);

How can I "plot" an image on top of another image with a different colormap?

I've got two images, one 100x100 that I want to plot in grayscale and one 20x20 that I want to plot using another colormap. The latter should be superimposed on the former.
This is my current attempt:
A = randn(100);
B = ones(20);
imagesc(A);
colormap(gray);
hold on;
imagesc(B);
colormap(jet);
There are a couple of problems with this:
I can't change the offset of the smaller image. (They always share the upper-left pixel.)
They have the same colormap. (The second colormap changes the color of all pixels.)
The pixel values are normalised over the composite image, so that the first image changes if the second image introduces new extreme values. The scalings for the two images should be separate.
How can I fix this?
I want an effect similar to this, except that my coloured overlay is rectangular and not wibbly:
Just change it so that you pass in a full and proper color matrix for A (i.e. 100x100x3 matrix), rather than letting it decide:
A = rand(100); % Using rand not randn because image doesn't like numbers > 1
A = repmat(A, [1, 1, 3]);
B = rand(20); % Changed to rand to illustrate effect of colormap
imagesc(A);
hold on;
Bimg = imagesc(B);
colormap jet;
To set the position of B's image within its parent axes, you can use its XData and YData properties, which are both set to [1 20] when this code has completed. The first number specifies the coordinate of the leftmost/uppermost point in the image, and the second number the coordinate of the rightmost/lowest point in the image. It will stretch the image if it doesn't match the original size.
Example:
xpos = get(Bimg, 'XData');
xpos = xpos + 20; % shift right a bit
set(Bimg, 'XData', xpos);
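One caveat (my addition): once A is passed as a true-color matrix, imagesc no longer maps it through a colormap, so A must already lie in [0,1] (or be uint8). A minimal sketch, assuming A holds arbitrary real values:
A = randn(100);                                 % arbitrary real values
A = (A - min(A(:))) / (max(A(:)) - min(A(:)));  % rescale to [0,1] for true-color display
A = repmat(A, [1, 1, 3]);                       % gray RGB image, unaffected by the colormap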

imagesc() in Matlab not showing equal axis

I use the following lines of code to plot the images:
for t=1:subplotCol
subplot(subplotRow,subplotCol,t)
imagesc([1 100],[1 100],c(:,:,nStep*t));
colorbar
xlabel('X-axis')
ylabel('Y-axis')
title(['Concentration profile at t_{',num2str(nStep*t),'}'])
subplot(subplotRow,subplotCol,subplotCol+t)
hold on;
plot(distance,gamma(:,1,t),'-ob');
plot(distance,gamma(:,2,t),'-or');
plot(distance,gamma(:,3,t),'-og');
xlabel('h');ylabel('\gamma (h)');
legend(['\Theta = ',num2str(theta(1))],...
['\Theta = ',num2str(theta(2))],['\Theta = ',num2str(theta(3))]);
end
I get the following subplot with images:
As you can see, the images in the first row are not scaled equally on the X and Y axes (the Y axis is longer than the X axis) even though the image matrix is 100x100 for each image in the first row.
Can someone help with how to make the images in the first row look like squares rather than the rectangles I currently get? Thanks.
Use the DataAspectRatio property of the axes and set it to [1 1 1]:
%# create a test image
imagesc(1:10,1:10,rand(10))
%# you should use the handle returned by subplot
%# instead of gca
set(gca,'dataAspectRatio',[1 1 1])
Another method is to use the command axis equal
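In the loop from the question, that would be (a sketch; axis image would additionally tighten the limits to the image):
subplot(subplotRow, subplotCol, t)
imagesc([1 100], [1 100], c(:,:,nStep*t));
axis equal   % equal data units on both axes, so the 100x100 image appears square
colorbar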
Regarding the size of different objects, you can set the exact position of each axes on the figure, for example for the first one use:
subplot(2,2,1); set(gca,'Position',[0.05 0.05 0.4 0.4])
