Can I speed up queries of triangulation properties in MATLAB? - performance

I'm writing a MATLAB optimization code, and the code itself works, but I'll need to run it multiple times inside another optimization process, so I wanted to try to speed it up. Using the profiler I was able to determine that the slowest parts of the program were all the "TR.Points(...)" and "TR.ConnectivityList(...)" queries. Are these queries supposed to be this slow, or are there more efficient ways to call them?
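A common workaround (not from the original post; a minimal sketch assuming the queries happen inside a loop) is to copy the properties into plain local arrays once, since repeated property access on a triangulation object is typically much slower than indexing an ordinary array:
% Cache the triangulation properties in local arrays once, outside the loop.
% TR is assumed to be an existing triangulation/delaunayTriangulation object.
pts  = TR.Points;            % N-by-dim array of vertex coordinates
tris = TR.ConnectivityList;  % M-by-3 (or M-by-4) array of vertex indices
for k = 1:size(tris,1)
    v = pts(tris(k,:),:);    % vertices of element k, plain array indexing
    % ... per-element work on v ...
end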

Related

How to analyze the computational overhead of a Processing Sketch

Please forgive me if the terms are inaccurate or this is the wrong forum.
Essentially I write sketches in Processing and I am struggling to find out why my code runs slowly.
Sometimes a sketch runs fast and I have no idea why, other than it has fewer lines of code. Sometimes a different sketch runs slowly and I have no idea why.
I am curious if there is a way within the Processing IDE, or maybe a general tool, to determine or analyze which lines of code are causing the sketch to run slow?
Such as a way to isolate "Oh, these lines are causing it to run the slowest. Looks like it's a section of this loop. Maybe I should concentrate on improving this function rather than searching for a needle in a haystack."
Similar to how when my computer is running slow I can use task manager to take a look at which programs are running slow and then adjust. I don't just guess. Or develop an unfounded penchant for quitting one program over another.
I of course could upload my sketch, but this is an example-independent problem I am trying to get better at solving. I would like to find a way to analyze all sketches. Or even code in different languages, etc.
If there is no general tool for analyzing a Processing sketch how do people go about this? Surely there must be a better method than trial and error, brute force, intuition, or otherwise. Of course those methods could yield better running code but there must be some formal analysis.
Even if you didn't have anything specific to share for Processing any suggestions on search terms, subjects, or topics would be appreciated as I have no idea how to proceed other than the brute force/trial and error method.
Thank you,
Without an example code snippet I can only provide a couple of general approaches.
The simplest thing you could do is to time sections of code and add print statements (a poor man's profiler). The idea is to take a time snapshot before running a function/block of code you suspect is slow, take another one after, then print the difference. The chunks of code with the largest time differences are what you want to focus on. Here's a minimal example, assuming doSomethingInteresting() is a function that you suspect is slow:
// get the current millis since the sketch started
int before = millis();
// call the function here
doSomethingInteresting();
// get the millis again, after the function and calculate the time difference
int diff = millis() - before;
println("executed in ~" + diff + "ms");
If you use Processing's default Java mode you can use VisualVM to sample your PApplet. In your case you will want to sample CPU usage, and the nice thing about VisualVM is that you will see the results sorted by the slowest function calls (the first ones you should improve), and you can drill down to see what is your code versus what is part of the runtime or other native code.

ode45 mex file runs slow

This is the function trial2 which creates the system of differential equations:
function xdot=trial2(t,x)
% First-order system form of the damped, parametrically forced oscillator:
% x1' = x2,  x2' = (-delta - epsilon*cos(t))*x1 - 0.7*delta*|x1|
delta=0.1045;epsilon=0.0048685;
xdot=[x(2);(-delta-epsilon*cos(t))*x(1)-0.7*delta*abs(x(1))];
Then, I try and solve this using ode45:
[t,x]=ode45('trial2',[0 10000000],[0;1]);
plot(t,x(:,1),'r');
But, this takes approximately 15 minutes. So I tried to improve the run time by creating a mex file. I found an ode45 mex equivalent function ode45eq.c. Then, I tried to solve by:
mex ode45eq.c;
[t,x]=ode45eq('trial2',[0 10000000],[0;1]);
plot(t,x(:,1),'r');
But this seems to run even slower. What could be the reason and how can I improve the run time to 2–3 minutes? I have to perform a lot of these computations. Also, is it worth creating a standalone C++ file to solve the problem faster?
Also, I am running this on an Intel i5 processor, 64-bit system with 8 GB of RAM. How much gain in speed do you think I can get if I move to a better processor with, say, 16 GB of RAM?
With current versions of Matlab, it's very unlikely that you'll see any performance improvement by compiling ode45 to C/C++ mex. In fact, the compiled version will almost certainly be slower, as you found. There's a good reason that ode45 is written in pure Matlab, as opposed to being compiled down to a native C function: it has to call user functions written in Matlab on every iteration. Additionally, Matlab's ode45 is a very dynamic function that is capable of interacting with the Matlab environment in many ways during the course of integration (plotting output functions, event detection, interpolation, etc.). It's also probably more straightforward to safely handle dynamic memory allocation in Matlab than in C.
Your C code calls your user function via mexCallMATLAB. This function is not really meant for repeated calls, especially if they transfer data back and forth. Doing what you're trying to do would likely require new mex APIs and possibly changes to the Matlab language.
If you want faster numerical integration, you're going to have to give up the convenience of being able to write your integration functions (e.g., trial2 in your example) in Matlab. You'll need to "hard code" your integration functions and compile them along with the integration scheme itself. With detailed knowledge about your problem, and decent programming skills, you can write a tight integration loop, and it may be possible to achieve an order-of-magnitude speedup in some cases; a sketch follows.
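As an illustration only (not from the original answer): a minimal MATLAB sketch of such a tight, fixed-step loop with the trial2 dynamics inlined. The step size and the shortened time span are assumptions, and the answer's actual suggestion is to write the equivalent loop in C; a classical fixed-step RK4 scheme is used here:
% Fixed-step classical RK4 with the trial2 dynamics inlined.
% h and the shortened span are illustrative; a real implementation
% would need error control (or at least a check against ode45's output).
delta = 0.1045; epsilon = 0.0048685;
f = @(t,x) [x(2); (-delta - epsilon*cos(t))*x(1) - 0.7*delta*abs(x(1))];
h = 0.01;                        % fixed step size (assumption)
t = (0:h:1e4)';                  % much shorter than [0 1e7], for illustration
x = zeros(numel(t),2);
x(1,:) = [0 1];                  % same initial condition as the question
for k = 1:numel(t)-1
    xk = x(k,:)';
    k1 = f(t(k),       xk);
    k2 = f(t(k) + h/2, xk + h/2*k1);
    k3 = f(t(k) + h/2, xk + h/2*k2);
    k4 = f(t(k) + h,   xk + h*k3);
    x(k+1,:) = (xk + h/6*(k1 + 2*k2 + 2*k3 + k4))';
end
plot(t, x(:,1), 'r');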
Lastly, your trial2 function has an absolute value as well as an oscillating trigonometric function in it. Is this differential equation stiff? Have you tried other solvers, e.g., ode15s? Compare the outputs even over a shorter time period. And you may find that you get a bit of a speed-up (~25% on my machine) if you use the modern way of passing function handles instead of strings to ode45:
[t,x] = ode45(@trial2,[0 10000000],[0;1]);
The trial2 function can still be in a separate M-file, or it can be a subfunction in the same file as your call to ode45 (that file needs to be a function file, not a script, of course).
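For the stiffness check suggested above, a minimal sketch (the shorter time span is an assumption, chosen so the comparison runs quickly):
% Compare ode45 and ode15s over a shorter, illustrative span first.
tspan = [0 1e4];                 % assumption: much shorter than [0 1e7]
[t45, x45] = ode45(@trial2, tspan, [0;1]);
[t15, x15] = ode15s(@trial2, tspan, [0;1]);
plot(t45, x45(:,1), 'r', t15, x15(:,1), 'b--');
legend('ode45','ode15s');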
What you did is basically replace today's implementation with the implementation from 1993; you have shown that MathWorks did a great job improving the performance of the ode45 solver.
I don't see any possibility to improve the performance in this case. You can assume that such a fundamental part of MATLAB as ode45 is implemented in an optimal way; replacing it with some other code just to mex it is not the solution. Everything you could gain by using a mex function is cutting off the overhead of the input/output handling, which is implemented in M-code: probably less than 0.1 s of execution time.

MATLAB: GUI progressively getting slower

I've been programming some MATLAB GUIs (not using GUIDE), mainly for viewing images and some other simple operations (such as selecting points and plotting some data from the images).
When the GUI starts, all the operations are performed quickly.
However, as the GUI is used (showing different frames from 3D/4D volumes and performing the operations mentioned above), it starts getting progressively slower, reaching a point where it is too slow for common usage.
I would like to hear some input regarding:
Possible strategies to find out why the GUI is getting slower;
Good MATLAB GUI programming practices to avoid this;
Possible references that address these issues.
I'm using set/getappdata to save variables in the main figure of the GUI and communicate between functions.
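For reference, that set/getappdata pattern looks roughly like this (a minimal sketch; the variable names and data are illustrative, not from the original post):
% Store shared state in the main figure once...
fig = figure;
setappdata(fig, 'volumeData', rand(64,64,10));   % illustrative data
% ...then retrieve it later, e.g. inside a callback:
vol = getappdata(fig, 'volumeData');
imagesc(vol(:,:,1));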
(I wish I could provide a minimal working example, but I don't think it is suitable in this case because this only happens in somewhat more complex GUIs.)
Thanks a lot.
EDIT (reporting back some findings from the profiler):
I used the profiler on two occasions:
immediately after starting the GUI;
after playing around with it for some time, until it started getting too slow.
I performed the exact same procedure in both profiling operations, which was simply moving the mouse around the GUI (same "path" both times).
The profiler results are as follows:
I am having difficulties in interpreting these results...
Why is the number of calls of certain functions (such as impixelinfo) so much larger in the second case?
Any opinions?
Thanks a lot.
The single best way I have found around this problem was hinted at above: forced garbage collection. Great advice, though the command forceGarbageCollection is not recognized in MATLAB; the command you want is java.lang.System.gc()... such a beast.
I was working on a project in which I was reading two serial ports at 40 Hz (using a timer) and one NIDAQ at 1000 Hz (using startBackground()) and graphing them all in real time. MATLAB's parallel processing limitations ensured that one of those processes would cause a buffer choke at any given time; animations would not keep up and would eventually freeze. I gained some initial success by defining a single plot and then, inside my animation loop, updating only the parameters that changed, using the set command (e.g., figure, subplot(311), axis([...]), hold on, p1 = plot(x1,y1,'erasemode','xor',...); then tic, while (toc<8), set(p1,'xdata',x1,'ydata',y1), ...).
Using set will make your animations MUCH faster and more fluid. However, you will still hit the buffer wall if you animate long enough with too much going on in the background, especially real-time data inputs. Garbage collection is your answer. It isn't instantaneous, so you don't want it to execute on every loop cycle unless your loop is extremely long. My solution is to set up a counter variable outside the while loop and use mod so that the collection only runs every n cycles (e.g., counter = 0 before the loop, counter = counter + 1 each cycle, and if ~mod(counter,n), java.lang.System.gc(); end), as in the sketch below.
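Putting that together, here is a minimal sketch (the plotted signal, loop duration, and value of n are illustrative, not from the original project):
% Create the plot once, update only its data in the loop, and force
% Java garbage collection every n cycles.
figure;
p1 = plot(nan, nan, 'r');            % single line object, created once
axis([0 8 -1.5 1.5]);
n = 500;                             % gc every n cycles (illustrative)
counter = 0;
tic;
while toc < 8
    counter = counter + 1;
    t = linspace(0, toc, 200);
    set(p1, 'XData', t, 'YData', sin(2*pi*t));  % update data in place
    drawnow limitrate;               % flush graphics without flooding the queue
    if mod(counter, n) == 0
        java.lang.System.gc();       % periodic forced garbage collection
    end
end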
This will save you (and hopefully others) loads of time and headache, trust me, and you will have MATLAB executing real-time data acq and animation on par with LabVIEW.
A good strategy to find out why anything is slow in Matlab is to use the profiler. Here is the basic way to use the profiler:
profile on
% do stuff now that you want to measure
profile off
profile viewer
I would suggest profiling a freshly opened GUI, and also one that has been open for a while and is noticeably slow. Then compare results and look for functions that have a significant increase in "Self Time" or "Total Time" for clues as to what is causing the slowdown.

Implementing caching for fast video processing in MATLAB

I am working on a project in which I need to process videos one by one and run my algorithm to extract scores from them. The problem is that the videos are taking too much time to process. I tried to parallelize the code using parfor in a few places, but performance is still bad. How can I improve performance? Is there any way of caching frames? I am reading each frame and processing it.
Any suggestion is welcome.
Caching is certainly an option, but it may not help. It's very hard to speed up code if you don't know what is slow. Use Matlab's profiler to find the slow parts, then work on speeding those parts up. After that, profile again to see what effect your changes had.
Here's the basic way to use the profiler:
profile on
% call your function here
profile off
profile report
First, make sure your code actually supports parallel execution, and that you have opened a parallel pool for CPU computation (matlabpool in older MATLAB releases, parpool in newer ones).
Second, you may need to optimize your code.
Third, you can try GPU computation.
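As one illustration of caching (a minimal sketch, assuming the whole video fits in memory; the file name and extractScore are hypothetical stand-ins for the real input and per-frame algorithm):
% Read all frames once (the cache), then score them in parallel.
v = VideoReader('myvideo.avi');      % hypothetical file name
frames = {};
while hasFrame(v)
    frames{end+1} = readFrame(v);    % cache each decoded frame
end
nFrames = numel(frames);
scores = zeros(1, nFrames);
parfor k = 1:nFrames
    scores(k) = extractScore(frames{k});  % hypothetical per-frame algorithm
end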

When should I use AsParallel() in LINQ/PLINQ

I'm looking to make use of the advantages of parallel programming in LINQ by using PLINQ. I'm not sure I understand its use entirely, apart from the fact that it's going to make use of all CPU cores more efficiently, so a large query might be quicker. Can I just call AsParallel() on LINQ calls to make use of the PLINQ functionality, and will it always be quicker? Or should I only use it when there is a lot of data to query or process?
You can't just assume that parallel execution is always faster. It depends. In some situations you will gain a lot on multi-core processors by doing things in parallel. In other cases you will just slow things down, since parallel loops have a small overhead compared to simple loops.
For example, see my other answer, which explains why nested parallel loops can be a disaster.
Now, the best way to know whether a parallel loop is a good idea in a given context is to test both the parallel and the non-parallel implementations and measure the time they take.
To further add to the answer, it also depends on your data. Going a little 'old school' for a moment, you could go down the road of loop unrolling, using for instead of foreach, and so on.
However, you really need to ensure you aren't micro-optimising. Depending on your data fetches and the size of data (certainly with paged data) then you can probably get away with not using it.
That's not to say that making your LINQ multi-core aware isn't cool. But be aware of the setup costs of doing something like that, and weigh the benefits against the complexities of maintaining and debugging that code.
If your algorithm is already top-notch, then looking at the PLINQ extensions, a map-reduce mechanism, or similar may be the way to go. But first check your algorithm and your overall benefits. Operating on the right kind of collection in the right kind of way will always bring its own benefits (and problems!).
What are you trying to solve?