I have combined two images in MATLAB (a 3D volume and a binary mask). I imported both using niftiread, and after combining them I wrote the result using niftiwrite. However, the orientation seems to be wrong in the newly created image. Has anyone encountered this before?
I tried permute, rot90, and flip, but none of them solved the problem.
The issue is that niftiread only loads the image itself and not the associated metadata that specifies the image orientation (among other things). If you then use niftiwrite without specifying this information, you get default header values.
If the original images on your disk are "3D.nii" and "mask.nii", you would want to do something like:
threeD_info = niftiinfo('3D.nii'); % load metadata for 3D image
threeD_data = niftiread(threeD_info); % load 3D image by specifying info
mask_data = niftiread('mask.nii'); % load mask image by specifying filename
output_data = threeD_data .* mask_data; % multiply images (or other operation of your choice)
niftiwrite(output_data,'3Dmasked.nii',threeD_info); % write output image to 3Dmasked.nii including metadata
Note: Depending on what type of "combination" you are performing, you might need to update some of the fields in threeD_info accordingly, such as the datatype.
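For example, if the combination changed the class of the data from what the original header declares, a minimal sketch of the adjustment might look like this (assuming you want to store single precision):
output_data = single(output_data);          % cast to the class you intend to store
threeD_info.Datatype = class(output_data);  % niftiwrite requires this to match the data
threeD_info.BitsPerPixel = 32;              % 32 bits per voxel for single precision
niftiwrite(output_data,'3Dmasked.nii',threeD_info);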
I want to extract images from PDFs while retaining knowledge of their context (page number and coordinates on the page). (Some tools, e.g. pdfminer, only emit image files with non-semantic names, e.g. Img0.bmp.) I can do this with PDFBox (Java), but I'd ideally like a Python tool.
My current (arbitrary) design is to create filenames of the form:
image_<page>_<serial_in_page>_<x1>_<x2>__<y1>_<y2>.png
Currently pdfplumber exposes coordinates, but with a PDFStream and encoding information rather than an image. Code to convert the stream to a *.png would solve the problem.
(NOTE: the pdfplumber approach of rendering to the screen and capturing the known rectangle (which I use) is not a solution as the image is often degraded and frequently overwritten with text.)
(NOTE: I have had problems with several Python tools (pdfminer.six, PyMuPDF) extracting images as they make the background black, which obscures black text, etc. PDFBox (Java) doesn't have this problem.)
Python tools are likely to have similar problems to any other tools, even those that can manipulate images or extract their details with a single command.
With a single command line we can extract all the compressed images in the file and see them laid out visually. Here the individual object references have been converted into ordinary TIFF or JPG files (other tools may use PBM and PGM, especially for OCR, but the result is generally similar). The greyscale alpha softmask (B&W) transparency components are not necessarily tied directly to a page or an image other than by internal references, and they usually look like negatives.
What you may notice is that objects most likely inserted as a single PNG are broken in two when injected into the PDF, and their scaled placement is defined separately. Note that a raw PNG (whatever its original resolution) retains its number of dots, but its scale when inserted into the PDF can differ horizontally and vertically, so the only meaningful data is the width and height in pixel values.
It is not trivial to overlay the mask on the RGB component once they are simply extracted, but doing so can allow for colour changes if desired.
So PDFBox is one of the simpler/better tools for blending to a suitable output (as you have discovered), but in Python it is generally the top-end libraries that can identify the placement of the two images and combine them into a suitable alpha output such as a new PNG.
For many suggestions see Extract images from PDF without resampling, in python?.
The related part of your question was knowing where those components are placed on each page, since one image (and its alpha mask) could be placed multiple times, such as a heading logo on each page. Again, it is easy to see with a single command line which pages are referenced by a group of images, but to see which image is placed where requires analysing each page's resources, which in turn requires a library interrogation of the page contents; this is best done via powerhouse libraries such as iText or alternatives like PDFTron for Python.
For a related command in PyMuPDF see https://pymupdf.readthedocs.io/en/latest/page.html#Page.get_image_rects
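If PyMuPDF is acceptable despite the caveat above, the placement lookup might be sketched like this (a rough, untested sketch; 'input.pdf' is a placeholder, and note that extract_image returns the raw stream, so the black-background/softmask issue mentioned in the question can still apply):
import fitz  # PyMuPDF

doc = fitz.open("input.pdf")  # placeholder file name
for page_number, page in enumerate(doc, start=1):
    for serial, item in enumerate(page.get_images(full=True)):
        xref = item[0]
        raw = doc.extract_image(xref)  # dict with "image" bytes and "ext"
        for rect in page.get_image_rects(item):  # one rect per placement
            name = (f"image_{page_number}_{serial}"
                    f"_{rect.x0:.0f}_{rect.x1:.0f}__{rect.y0:.0f}_{rect.y1:.0f}"
                    f".{raw['ext']}")
            with open(name, "wb") as fh:
                fh.write(raw["image"])
Since get_image_rects returns one rectangle per placement, a repeated heading logo yields one file per page it appears on.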
I don't have a solution in Python but here is a small script using Ruby and HexaPDF:
require 'hexapdf'

class ImageBorderProcessor < HexaPDF::Content::Processor
  def initialize(page, index)
    super()
    @page = page
    @index = index
    @count = 0
  end

  def paint_xobject(name)
    super
    xobject = resources.xobject(name)
    return unless xobject[:Subtype] == :Image
    w, h = xobject.width, xobject.height
    llx, lly = graphics_state.ctm.evaluate(0, 0)
    lrx, lry = graphics_state.ctm.evaluate(1, 0)
    urx, ury = graphics_state.ctm.evaluate(1, 1)
    ulx, uly = graphics_state.ctm.evaluate(0, 1)
    # If the image is rotated, you will need all 4 coordinates, not just the 2
    filename = "image_#{@index}_#{@count}_#{llx}_#{urx}_#{lly}_#{ury}"
    xobject.write(filename) rescue puts "Can't write image #{@index}-#{@count}"
    @count += 1
  end
end

doc = HexaPDF::Document.open(ARGV[0])
doc.pages.each_with_index do |page, index|
  processor = ImageBorderProcessor.new(page, index)
  page.process_contents(processor)
end
It will iterate over all pages of the input document provided on the command line and create files using your file naming scheme. Since HexaPDF doesn't currently support writing all types of PDF images, you might get some error messages for those that can't be written.
If a supported image has an associated image mask defined, it will automatically be used to create a transparent image.
The script will output all images found, even repeated ones. This could easily be changed so that just a soft link is created for repeated images.
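A hypothetical sketch of that change, keying on each xobject's PDF object number (oid) in a hash shared across pages (here it just reports repeats; a File.symlink call could go in the same branch):
require 'hexapdf'

class UniqueImageProcessor < HexaPDF::Content::Processor
  def initialize(index, seen)
    super()
    @index = index
    @seen = seen    # oid => filename of the first copy written
    @count = 0
  end

  def paint_xobject(name)
    super
    xobject = resources.xobject(name)
    return unless xobject[:Subtype] == :Image
    filename = "image_#{@index}_#{@count}"
    if @seen.key?(xobject.oid)
      puts "#{filename} repeats #{@seen[xobject.oid]}"
    else
      xobject.write(filename) rescue puts "Can't write image #{@index}-#{@count}"
      @seen[xobject.oid] = filename
    end
    @count += 1
  end
end

doc = HexaPDF::Document.open(ARGV[0])
seen = {}
doc.pages.each_with_index do |page, index|
  page.process_contents(UniqueImageProcessor.new(index, seen))
end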
I have already done the image registration, and now I want to apply the registration to the RGB image because I need to extract the contour. As imregister only works with grayscale images, I converted my image to grayscale, but now I can't find the intensity value of the contour to locate the contour indexes. What kind of algorithm does imregister apply to the image to change the intensity values of the pixels? Or is there another way to go back to the RGB image to extract the initial contour in the registered image? Does anyone have any suggestions?
Here is my MATLAB code:
% Algorithm for image validation
% Open the two images which will be compared
name2=input('Image name ( automated segmentation) ','s');
img_automated=imread(name2,'png');
figure (1), imshow(img_automated), title('Image automated')
name=input('Image name ( manual segmentation) ','s');
img_manual=imread(name,'png');
img_manual_gray=rgb2gray(img_manual);
%img_manual_gray=img_manual(:,:,3);
figure (2), imshow (img_manual),title('Image manual')
img_automated_gray=rgb2gray(img_automated);
%img_automated_gray=img_automated(:,:,3);
%img_double=im2double(img_automated_gray);
figure (3), imshow (img_automated_gray), title ('Image converted to grayscale');
%img_automated_gray2=rgb2gray(img_automated);
% View images side by side
figure (6), imshowpair(img_manual,img_automated,'diff')
title('Images overlap')
%% Configure parameters in imregconfig
[optimizer,metric]=imregconfig('Multimodal');
%% Default registration
registered=imregister(img_automated_gray,img_manual_gray,'rigid',optimizer,metric);
%tform = imregtform(img_automated,img_manual,imref2d,'affine',optimizer,metric)
figure(7), imshowpair(registered, img_manual_gray,'falsecolor'); title('Default registration');
figure(8), imshowpair(registered, img_manual_gray,'montage','Scaling','independent'); title('Default registration');
figure(9), imshow(registered);
C = imfuse(registered,img_automated);
figure(21);imshow(C);
%%%%%%%%%%%%%%% Here%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% I tried this process to recover the transformation applied to the registered image, and then apply it to the initial automated RGB image, but B isn't the same as the registered image. Any suggestions?
tform = imregtform(img_automated_gray,img_manual_gray,'rigid',optimizer,metric);
B = imwarp(img_automated,tform);
figure(22);imshow(B);
Links to the images:
https://www.dropbox.com/s/xbanupnpjaaurj5/manual.png?dl=0 (manual- rgb)
https://www.dropbox.com/s/6fkwi3xbicwzonz/registered%20image.png?dl=0
You are on the right track with your use of imregtform. The missing piece is that imwarp, by default, chooses an output grid with the same resolution as the input grid with limits big enough to capture the entire output image.
In most registration cases, this is not what you want. What you want is to use the 'OutputView' name/value pair to specify the resolution and limits of the output grid. In many registration use cases, this is the resolution/limits of the fixed image:
tform = imregtform(img_automated_gray,img_manual_gray,'rigid',optimizer,metric);
B = imwarp(img_automated,tform,'OutputView',imref2d(size(img_manual_gray)));
figure(22);imshow(B);
I am creating a GUI containing an image using the following code:
try
Imagenamehere = imread('Imagenamehere.jpg');
axes(handles.Logo)
image(Imagenamehere)
set(gca,'xtick',[],'ytick',[])
catch
msgbox('Please download all contents from the zipped file into working directory.')
end
The image shows up but for some reason is completely coloured blue as if put through a blue filter. I don't think it would be wise to upload the image but it is a simple logo coloured black and white.
Anyone know what could be causing this?
Check the size, type (probably uint8) and range of your image. It sounds like your image is being displayed with the colormap set to jet (the default), and possibly your data range is not what MATLAB expects (e.g. 0 to 1 rather than 0 to 255), resulting in all your values being relatively low (blue on the jet colormap).
"black and white" is just one way of interpreting an image file which contains only two colors. MATLAB makes several assumptions when you pass data into a display function like image. If you don't specify colormap and image data range, it will make a guess based off things like data type.
One possibility is that your logo file is an indexed image. In these cases you need to do:
[Imagenamehere, map] = imread('Imagenamehere.jpg'); % read both the index data and the colormap
colormap(map); % display with the file's own colormap instead of the default
I'm currently creating my figures in MATLAB to embed them via LaTeX into a PDF for later printing. I save the figures via the script export_fig. Now I wonder which is the best way to go:
Which size of the MATLAB figure window to choose?
Which -m option to pass to the script? It changes the resolution and the size of the image...
I'm wondering about this with regard to the following two observations:
When choosing a bigger figure size, more tick marks are shown and the single point markers are better visible.
When using a small figure with a big -m option, I still get only a few tick marks.
Also: when I generate an image which is quite huge (e.g. resolution 300 and still 2000x2000 px) and then embed it into the document, does it look ugly? Will it be embedded with nice scaling, or is it the same ugliness as when you upload a 1000x1000 px image to a homepage and embed it via the width and height attributes in HTML, where the browser displays it badly because it doesn't do a real resize, so it looks unsharp and ugly?
Thanks in advance!
MATLAB plots are internally described as vector graphics, and PDF files are also described using vector graphics. Rendering the plot to a raster format is a bad idea, because you are forced to choose a resolution and you end up with bigger files.
Just save the plot to EPS format, which can be directly embedded into a PDF file using latex. I usually save my MATLAB plots for publication using:
saveas(gcf, 'plot.eps', 'epsc');
and embed them directly into my latex file using:
\includegraphics[width=0.7\linewidth]{plot.eps}
Then, you only need to choose the proportion of the line the image is to take (in this case, 70%).
Edit: IrfanView and others (XnView) don't display EPS very well. You can open them in Adobe Illustrator to get a better preview of what it looks like. I always insert my plots this way and they always look exactly the same in the PDF as in MATLAB.
One bonus you also get with EPS is that you can actually specify a font size so that the text is readable even when you resize the image in the document.
As for the number of ticks, you can look at the axes properties in the MATLAB documentation. In particular, the XTick and YTick properties are very useful for manually controlling how many ticks appear no matter what the window resolution is.
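For example, with hypothetical tick positions:
set(gca, 'XTick', -3:1:3);    % fixed x ticks, independent of window size
set(gca, 'YTick', 0:0.5:2);   % fixed y ticks likewise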
Edit (again): If you render the image to a raster format (such as PNG), it is preferable to choose the exact same resolution as the one used in the document. Rendering a large image (by using a big window size) and making it small in the PDF will yield bad results mainly because the size of the text will scale directly with the size of the image. Rendering a small image will obviously make for a very bad effect because of stretching.
That is why you should use a vector image format. However, the default MATLAB settings for figures produce some of the same problems as raster images: text size is not specified as a font size and the number of ticks varies with the window size.
To produce optimal plots in the final render, follow the given steps:
Set the figure's font size to a decent setting (e.g. 11pt)
Render the plot
Decide on number of ticks to get a good effect and set the ticks manually
Render the image to color EPS
In MATLAB code, this should look somewhat like the following:
function [] = nice_figure ( render )
%
% invisible figure, good for batch renders.
f = figure('Visible', 'Off');
% make plots look nice in output PDF.
set(f, ...
'DefaultAxesFontSize', 11, ...
'DefaultAxesLineWidth', 0.7, ...
'DefaultLineLineWidth', 0.8, ...
'DefaultPatchLineWidth', 0.7);
% actual plot to render.
a = axes('Parent', f);
% show whatever it is we need to show.
render(a);
% save file.
saveas(f, 'plot.eps', 'epsc');
% collect garbage.
close(f);
end
Then, you can draw some fancy plot using:
function [] = some_line_plot ( a )
%
% render data.
x = -3 : 0.001 : +3;
y = expm1(x) - x - x.^2;
plot(a, x, y, 'g:');
title('f(x)=e^x-1-x-x^2');
xlabel('x');
ylabel('f(x)');
% force use of 'n' ticks.
n = 5;
xlimit = get(a, 'XLim');
ylimit = get(a, 'YLim');
xticks = linspace(xlimit(1), xlimit(2), n);
yticks = linspace(ylimit(1), ylimit(2), n);
set(a, 'XTick', xticks);
set(a, 'YTick', yticks);
end
And render the final output using:
nice_figure(@some_line_plot);
With such code, you don't need to worry about the window size at all. Notice that I haven't even shown the window for you to play with its size. Using this code, I always get beautiful output and small EPS and PDF file sizes (much smaller than when using PNG).
The only thing this solution does not address is adding more ticks when the plot is made larger in the LaTeX code, but that can't be done anyway.
I am attempting to use RMagick to convert an SVG to a PNG of a different size.
When I read in the SVG with Magick::Image.read('drawing.svg') and write it out to drawing.png (the equivalent of just running convert drawing.svg drawing.png from the command line), the size is 744x1052.
Let's suppose I want the PNG to be twice as large as it is by default. You can't just read it in, resize it, then write it out, as that first rasterizes the SVG and then scales that image to be twice as large, losing quality and the entire benefit of using a vector graphic in the first place. So instead, if I understand correctly, you're supposed to set the image's density upon read.
image = Magick::Image.read('drawing.svg'){self.density = 144}.first
But image.density still reports the density as "72x72", and if I write out the image it has the same size as before, 744x1052. It doesn't seem to matter how I specify the density upon read. With 144, "144", 144.0, "144.0", "144x144", and "144.0x144.0", it always comes back "72x72".
Running convert -density 144 drawing.svg drawing.png from the command line works as expected and generates a PNG that's twice as large as before, 1488x2104.
I'm using OS X 10.6.7, ImageMagick 6.7.0-0 (installed via MacPorts), RMagick 2.13.1, and Ruby 1.9.2p180. When I put my code into the context of a little Sinatra webapp on Heroku, it has the same incorrect behavior, so the issue does not seem to lie with OS X or MacPorts.
Density is about resolution (i.e. dots per inch), not the rendered size. From the fine manual:
The vertical and horizontal resolution in pixels of the image. The default is "72x72".
I think you're looking for resize or resize!:
Changes the size of the receiver to the specified dimensions.
You can specify the new size in two ways. Either specify the new width and height explicitly, or specify a scale factor, a number that represents the percentage change.
So this will work:
Magick::Image.read('drawing.svg').first.resize(2).write('drawing.png')
Or this:
img = Magick::Image.read('drawing.svg').first
img.resize!(2)
img.write('drawing.png')
I don't know why convert behaves differently from the library; there could be other settings in play that have different defaults in the library, or maybe -density does more than just set the density.
If resize isn't doing the trick for you (and, based on your comments, it is happening too late to be of use), you can try setting the size parameter in the block:
img = Magick::Image.read('drawing.svg'){ |opts| opts.size = '1488x2104' }.first
Of course, you have to know how big the SVG is beforehand. You're supposed to be able to specify things like 200%x200% for the geometry, but read always ignores the flag on the Magick::Geometry when I try it.
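One hypothetical workaround for the "know the size beforehand" problem: read the SVG once at the default size just to learn its dimensions, then read it again with a doubled size string (this assumes the size option behaves for you as in the snippet above):
require 'RMagick'   # 'rmagick' in newer versions

probe = Magick::Image.read('drawing.svg').first   # default rasterization
w, h = probe.columns, probe.rows                  # e.g. 744 and 1052

img = Magick::Image.read('drawing.svg') { |opts| opts.size = "#{2 * w}x#{2 * h}" }.first
img.write('drawing.png')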