This question already has an answer here:
save high resolution figures with parfor in matlab
(1 answer)
Closed 8 years ago.
I've got a ~1600 line program that reads in images (either tiff or raw), performs a whole bunch of different mathematical and statistical analyses, and then outputs graphs and data tables at the end.
Almost two-thirds of my processing time is due to looping 16 times over the following code:
h = figure('Visible','off','units','normalized','outerposition',[0 0 1 1]);
set(h,'PaperPositionMode','auto');
imagesc(picdata); colormap(hot);
imgtmp = hardcopy(h,'-dzbuffer','-r0');
imwrite(imgtmp,hot,'picname.png');
Naturally, 'picname.png' and picdata change each time around.
Is there a better way to invisibly plot and save these pictures? The processing time mostly takes place inside imwrite, with hardcopy coming second. The whole purpose of the pictures is just to get a general idea of what the data looks like; I'm not going to need to load them back into Matlab to do future processing of any sort.
Try to place the figure off-screen (e.g., Position=[-1000,-1000,500,500]). This will make it "Visible" and yet no actual rendering will need to take place, which should make things faster.
Also, try to reuse the same figure for all images - no need to recreate the figure and image axes and colormap every time.
Finally, try using my ScreenCapture utility rather than hardcopy+imwrite. It uses a different method for taking a "screenshot" which may possibly be faster.
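For what it's worth, the same figure-reuse idea looks like this in Python/matplotlib (just a sketch for comparison; the sizes, loop count, and file names are made up):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # off-screen rendering: no window is ever created
import matplotlib.pyplot as plt

# Create the figure, axes, and image object once...
fig, ax = plt.subplots()
im = ax.imshow(np.zeros((64, 64)), cmap="hot")

# ...then only replace the pixel data before each save.
for i in range(3):
    picdata = np.random.rand(64, 64)   # stand-in for the real image data
    im.set_data(picdata)
    im.set_clim(picdata.min(), picdata.max())  # rescale colors to the new data
    fig.savefig(f"picname_{i}.png")
```

The point is the same as above: the expensive setup (figure, axes, colormap) happens once, and only the cheap data swap and save happen per image.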
I have sequences of images (2 images in each sequence). I am trying to use ConvLSTM2D to train on these sequences.
Question:
Can I train an LSTM model on just 2 images per sequence? The goal would be predicting the second image from the first.
Thanks!
You can, but is this the best thing to do? (I don't know either.)
I think that using a sequence of two steps won't bring a lot of extra intelligence; it's just an input -> output pair in the end.
You could also simply put one image as input and the other as output in a sort of U-Net.
But many of these things have to be tested, and the results can surprise us. Maybe the way things are made inside the LSTM, with gates and such, could add some interesting behavior?
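For reference, Keras's ConvLSTM2D expects input shaped (samples, time, rows, cols, channels), so with two-image sequences the time axis is just 2. Both framings can be sketched with plain NumPy arrays (shapes only, no training; all sizes here are made up):

```python
import numpy as np

n, h, w, c = 8, 64, 64, 1  # made-up dataset sizes

# Framing 1: ConvLSTM2D-style sequences of length 2,
# shaped (samples, timesteps, rows, cols, channels)
x_seq = np.zeros((n, 2, h, w, c))

# Framing 2: plain image-to-image (U-Net style) pairs:
# the first image is the input, the second is the target
x_in = x_seq[:, 0]    # (samples, rows, cols, channels)
y_out = x_seq[:, 1]

assert x_in.shape == (n, h, w, c)
assert y_out.shape == (n, h, w, c)
```

The second framing simply drops the time axis, which is why a two-step sequence is "just an input -> output pair in the end."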
I am trying to use PSD.rb to generate preview PNGs based on layer comps. When I run the script it takes roughly 12 minutes to produce 1 PNG.
This seems like quite a long time to me, as the PSD is not overly complicated. There are ~30 layers, of which ~10 have filters, masks, etc.
The image size is 1800 x 1114 @ 72ppi. Is it reasonable for PSD.rb to take this long?
Besides the amount of time it takes, the real issue is that the output PNG does not look like the layer comp. Pieces of the comp are there, but there are major discrepancies.
Does PSD.rb have issues applying items like masks, filters, or anything of the like?
Does anyone know how I might troubleshoot this?
In the documentation they mention adding PSD.debug = true. This dumps a bunch of output, but nothing really jumps out as problematic.
Here is a link to the lib: https://github.com/layervault/psd.rb
The script I have created is very simple:
require 'psd'
psd = PSD.new('dusk.psd')
psd.parse!
psd.tree.filter_by_comp('Layer Comp 1').save_as_png('./Version A.png')
I am working on drawing graphs with Gnuplot.
The thing is, as it runs, the memory usage grows so high that it either stops working properly or is killed within a few minutes.
My laptop's memory is 4 GB, and the file size is around 1 GB to 1.5 GB.
Actually, I am a beginner at C and gnuplot. What I cannot understand is why this 'simple-looking' job takes so much memory; it's just matching points of t and x.
I'll write down a part of the file below. And the code I wrote down on the terminal was;
plot "fl1.dat" u 1:2 linetype 1
1.00000e+00 1.88822e-01
2.00000e+00 3.55019e-01
3.00000e+00 -1.74283e+00
4.00000e+00 -2.67627e+00
...
...
...
Is the only thing I can do to add more RAM, or to use a computer in the lab?
Thank you.
Plotting a data file is done to see the overall or global behavior of some quantity, not the local behavior, for which you can just read the value from the data file. This said, in your case I think you do not need to plot each and every point from the file, since the file is huge and it seems pointless to plot it all. Thus I suggest the following:
pl 'fl1.dat' u 1:2 every 10
This will plot only every 10th point, but if there are too many points spaced very finely anyway, that will still show the global behavior of the plot nicely. Remember that this won't connect the individual points. If you still want a continuous line, I suggest creating another data file with every 10th point in it and then plotting it as usual with lines.
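Creating that decimated file takes only a few lines in any scripting language; for example, in Python (file names are illustrative, and a small stand-in file is generated first so the sketch is self-contained):

```python
# Make a small stand-in data file (the real one is fl1.dat).
with open("fl1.dat", "w") as f:
    for t in range(100):
        f.write(f"{t:.5e} {0.1 * t:.5e}\n")

# Keep every 10th line of the data file, so gnuplot's "with lines"
# still draws a continuous curve through far fewer points.
with open("fl1.dat") as src, open("fl1_every10.dat", "w") as dst:
    for i, line in enumerate(src):
        if i % 10 == 0:
            dst.write(line)
```

The decimated file can then be plotted with, e.g., plot 'fl1_every10.dat' u 1:2 with lines.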
Another thing to note is that the choice of output terminal can have a tremendous effect on memory consumption: interactive windows or vector formats will consume much more (I guess because these formats keep track of every single data point, even though, as stressed by Peaceful, you probably don't need all those points). So a quick way to reduce memory consumption may be to set the output terminal to a modestly-sized png, e.g.:
set terminal png size 1000,1000
set output "mygraph.png"
I have a working MATLAB program that measures data to tune a machine in real time, using the SOAP library from MATLAB (several measurements per second). It was working well, updating two figures, each containing four sub-plots, as the tuning proceeds. Recently the plots have stopped updating, just showing a grey box (actually two shades of grey if you resize the window).
I can tell that the program is running properly from debug information written to the MATLAB console. Also, sometimes the plots update in a burst, adding many new points, when they should be updating with every new point.
I have made several small changes to the code to reduce the comms traffic, but my biggest recent change is to add lots of calls to toc to measure the time taken in various parts of the code, with a single tic at the start.
Is it possible these extra timing calls could be suppressing the plots?
Here is a cut down copy of my code. It is a nested function that makes use of some configuration data from the top level function. The figure is one of two created in the top level function then completely redrawn as new data arrives.
function acc_plot(accFig, accData)
figure(accFig);
sp1 = '221';
% Plot current vs raw position
subplot(sp1);
plot(xRawPos,yCfbDcM,'r.', xRawPos,yCfbDcP,'b.')
hold on;
if tuneConfig.dualSensors(accData.axisIx)
plot(xRawPosB,yCfbDcM,'g.', xRawPosB,yCfbDcP,'m.')
end
title(['acc check ' tuneConfig.axisNames{accData.axisIx}])
ylabel('CFBM(r)/CFBP(b) [A]')
xlabel(xPosLabel)
grid on
end
Add "drawnow" to your function to force a refresh.
I am using GNUplot for plotting a small matrix.
The matrix is 100x100 in size, e.g.
1.23212 2.43123 -1.24312 ......
-4.23123 2.00458 5.60234 ......
......
The data is not neatly stored in the file.
So from a C++ point of view, since each value has no fixed length, there is no way to load a whole number in one go; the parser has to check each character as the number is being read. I guess this should be the reason for the slow plotting speed.
Now I have 3 questions:
Q1: Is loading the bottle neck?
Q2: If I can make the data file neatly stored. e.g.
1.23212 2.43123 -1.24312 ......
-4.23123 2.00458 5.60234 ......
......
Does the plotting speed get any improvement? (Maybe GNUplot can detect the pattern and thus improve the loading speed. Not sure.)
Q3: Any other options that I can set to make it faster?
Edit
I tried these:
-3.07826e-21 -2.63821e-20 -1.05205e-19 -3.25317e-19 -9.1551e-19
When outputting, I used setw to make sure they are aligned. But I think I still need to tell GNUplot to load 13 characters at a time, then perform strtod.
I would guess that, in order to handle the general case where there is no information about the length of each number, it is safest to read digit by digit until there is a space.
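The setw-style alignment described above can be mimicked in Python by padding each value to a fixed 13-character field, so a reader could in principle consume the file in 13-byte slices (the width 13 comes from the question; this is only an illustration, not what gnuplot actually does):

```python
values = [-3.07826e-21, -2.63821e-20, -1.05205e-19, -3.25317e-19, -9.1551e-19]

# Fixed-width formatting (like C++ setw): every field is exactly
# 13 characters, so fields can be recovered by slicing instead of scanning.
row = "".join(f"{v:13.5e}" for v in values)
fields = [row[i:i + 13] for i in range(0, len(row), 13)]

assert all(len(f) == 13 for f in fields)
# Slicing and re-parsing recovers the same (rounded) values.
assert [float(f) for f in fields] == [float(f"{v:.5e}") for v in values]
```

Note that this only shows the file-format side; a text parser still has to convert each field with something like strtod either way.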