I am working on a project wherein I need to process videos one by one and run my algorithm on each to extract scores. The problem is that the videos are taking too much time to process. I tried to parallelize the code using parfor in a few places, but performance is still bad. How can I improve performance? Is there any way of caching frames? I am reading each frame and processing it.
Any suggestion is welcome.
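For reference, a minimal sketch of the per-frame loop described above, where extractScore stands in for the actual scoring algorithm and the file name is made up:
v = VideoReader('myVideo.mp4');        % open one video (placeholder file name)
scores = [];
while hasFrame(v)
    frame = readFrame(v);              % decode one frame at a time
    scores(end+1) = extractScore(frame); %#ok<AGROW> hypothetical scoring function
end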
Caching is certainly an option, but it may not help. It's very hard to speed up code if you don't know what is slow. Use MATLAB's profiler to find the slow parts, then work on speeding those parts up. After that, profile again to see what effect your changes had.
Here's the basic way to use the profiler:
profile on
% call your function here
profile off
profile report
First, make sure that your code actually supports parallelization, and that you have run matlabpool to enable CPU parallel computation.
Second, you may need to optimize your code.
Third, you can try GPU parallel computation.
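A rough sketch of the first and third suggestions, for what it's worth; processOneVideo is a hypothetical placeholder for the per-video processing, the file names and frame data are made up, and the GPU part assumes the per-frame arithmetic is supported on gpuArray (on newer releases parpool replaces matlabpool):
% First suggestion: make sure a pool of CPU workers is open
if isempty(gcp('nocreate'))
    parpool;                                  % replaces "matlabpool open" on newer releases
end

% parfor only helps if the iterations (here, the videos) are independent
videoFiles = {'vid1.mp4', 'vid2.mp4'};        % placeholder list of videos
results = cell(numel(videoFiles), 1);
parfor k = 1:numel(videoFiles)
    results{k} = processOneVideo(videoFiles{k});   % hypothetical per-video function
end

% Third suggestion: push heavy per-frame arithmetic onto the GPU
frame    = rand(480, 640, 'single');          % placeholder frame data
frameGPU = gpuArray(frame);
kernel   = ones(5, 'single') / 25;            % simple averaging kernel
smoothed = conv2(frameGPU, kernel, 'same');   % example operation that runs on the GPU
result   = gather(smoothed);                  % bring the result back to the CPU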
Related
I have a complex and time-consuming experiment to run in PsychoPy. Do you have any recommendations for speeding up the running of the whole experiment? I would also like to analyze the stimuli, and the order in which they were presented, more quickly from the spreadsheet they are exported to.
If you would use some specific code, please be explicit about it and about where you recommend inserting the routine/component.
Thanks
I'm writing a MATLAB optimization code, and the code itself works, but I'll need to run it multiple times inside another optimization process, so I wanted to try to speed it up. Using the profiler, I was able to see that the slowest parts of the program were all the calls to "TR.Points(...)" and "TR.ConnectivityList(...)". Are these queries supposed to be this slow, or are there more efficient ways to call them?
Please forgive me if the terms are inaccurate or this is the wrong forum.
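For context, a minimal sketch of the access pattern in question, together with the workaround commonly suggested for it: copy the properties into plain arrays once, outside the loop, and index those instead (TR is assumed to be a triangulation object, and the loop body here is only illustrative):
P  = TR.Points;                   % vertex coordinates, copied once
CL = TR.ConnectivityList;         % vertex indices per element, copied once

total = 0;
for t = 1:size(CL, 1)
    verts = P(CL(t, :), :);       % index the local copies instead of TR.Points(...)
    total = total + mean(verts(:));
end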
Essentially, I write sketches in Processing and I am struggling to find out why my code runs slowly.
Sometimes a sketch runs fast and I have no idea why, other than that it has fewer lines of code. Sometimes a different sketch runs slowly and I have no idea why.
Is there a way, within the Processing IDE or maybe with a general tool, to determine or analyze which lines of code are causing the sketch to run slowly?
Such as a way to isolate "Oh, these lines are causing it to run the slowest. Looks like it's a section of this loop. Maybe I should concentrate on improving this function rather than searching for a needle in a haystack."
Similar to how, when my computer is running slowly, I can use the Task Manager to take a look at which programs are bogging it down and then adjust. I don't just guess, or develop an unfounded penchant for quitting one program over another.
I could of course upload my sketch, but this is an example-independent problem I am trying to get better at solving. I would like to find a way to analyze any sketch, or even code in other languages, etc.
If there is no general tool for analyzing a Processing sketch, how do people go about this? Surely there must be a better method than trial and error, brute force, intuition, or otherwise. Of course those methods can yield better-running code, but there must be some formal analysis.
Even if you don't have anything specific to share for Processing, any suggestions on search terms, subjects, or topics would be appreciated, as I have no idea how to proceed other than with the brute-force/trial-and-error method.
Thank you,
Without an example code snippet I can only provide a couple of general approaches.
The simplest thing you could do is time sections of code and add print statements (a poor man's profiler). The idea is to take a time snapshot before running a function/block of code you suspect is slow, take another one after, then print the difference. The chunks of code with the largest time differences are what you want to focus on. Here's a minimal example, assuming doSomethingInteresting() is a function that you suspect is slow:
// get the current millis since the sketch started
int before = millis();
// call the function here
doSomethingInteresting();
// get the millis again, after the function and calculate the time difference
int diff = millis() - before;
println("executed in ~" + diff + "ms");
If you use Processing's default Java mode, you can use VisualVM to sample your PApplet. In your case you will want to sample CPU usage, and the nice thing about VisualVM is that you will see the results sorted by the slowest function calls (the first ones you should improve), and you can drill down to see what is your code versus what is part of the runtime or other native code.
I've been programming some MATLAB GUIs (not using GUIDE), mainly for viewing images and some other simple operations (such as selecting points and plotting some data from the images).
When the GUI starts, all the operations are performed quickly.
However, as the GUI is used (showing different frames from 3D/4D volumes and performing the operations mentioned above), it starts getting progressively slower, reaching a point where it is too slow for common usage.
I would like to hear some input regarding:
Possible strategies to find out why the GUI is getting slower;
Good MATLAB GUI programming practices to avoid this;
Possible references that address these issues.
I'm using set/getappdata to save variables in the main figure of the GUI and communicate between functions.
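For reference, the pattern meant here is roughly the following (variable names are just placeholders):
fig = figure;
setappdata(fig, 'volumeData', rand(64, 64, 64));   % store shared data on the main figure

% ...later, inside a callback that has access to the figure handle:
vol = getappdata(fig, 'volumeData');               % retrieve the shared data
currentSlice = vol(:, :, 1);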
(I wish I could provide a minimal working example, but I don't think it is suitable in this case because this only happens in somewhat more complex GUIs.)
Thanks a lot.
EDIT (reporting back some findings using the profiler):
I used the profiler in two occasions:
immediately after starting the GUI;
after playing around with it for some time, until it started getting too slow.
I performed the exact same procedure in both profiling operations, which was simply moving the mouse around the GUI (same "path" both times).
The profiler results are as follows:
I am having difficulties in interpreting these results...
Why is the number of calls to certain functions (such as impixelinfo) so much bigger in the second case?
Any opinions?
Thanks a lot.
The single best way I have found around this problem was hinted at above: forced garbage collection. Great advice, though the command forceGarbageCollection is not recognized in MATLAB. The command you want is java.lang.System.gc()... such a beast.
I was working on a project wherein I was reading two serial ports at 40 Hz (using a timer) and one NIDAQ at 1000 Hz (using startBackground()) and graphing them all in real time. MATLAB's parallel processing limitations ensured that one of those processes would cause a buffer choke at any given time: animations would not be able to keep up, would eventually freeze, etc. I gained some initial success by making sure that I defined a single plot and only updated the parameters that changed inside my animation loop with the set command (e.g. figure, subplot(311), axis([...]), hold on, p1 = plot(x1,y1,'erasemode','xor',...); then tic, while (toc<8), set(p1,'xdata',x1,'ydata',y1), ...).
Using set will make your animations MUCH faster and more fluid. However, you will still run into the buffer wall if you animate long enough with too much going on in the background, especially with real-time data inputs. Garbage collection is your answer. It isn't instantaneous, so you don't want it to execute on every loop cycle unless your loop is extremely long. My solution is to set up a counter variable outside the while loop and use mod so that the collection only runs every n cycles, as in the condensed sketch below.
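A condensed sketch of the whole approach (plot once, update with set, force garbage collection every n-th cycle); getLatestData is just a placeholder for whatever feeds the animation:
figure;
p1 = plot(nan, nan);                          % create the plot object once
n = 100;                                      % run garbage collection every n cycles
counter = 0;
tic
while toc < 8
    [x1, y1] = getLatestData();               % placeholder for the real-time data source
    set(p1, 'XData', x1, 'YData', y1);        % update the existing plot in place
    drawnow
    counter = counter + 1;
    if ~mod(counter, n)
        java.lang.System.gc();                % periodic forced garbage collection
    end
end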
This will save you (and hopefully others) loads of time and headache, trust me, and you will have MATLAB executing real-time data acquisition and animation on par with LabVIEW.
A good strategy to find out why anything is slow in MATLAB is to use the profiler. Here is the basic way to use the profiler:
profile on
% do stuff now that you want to measure
profile off
profile viewer
I would suggest profiling a freshly opened GUI, and also one that has been open for a while and is noticeably slow. Then compare results and look for functions that have a significant increase in "Self Time" or "Total Time" for clues as to what is causing the slowdown.
I am searching for good (preferably plug-and-play) solutions for performing diagnostics on software I am developing. The software I am working on has several components that require extensive computing resources, and so we're attempting to capture the performance of these components for two reasons: 1) estimate required computing resources and thus the costs of running the software, and 2) quantify what an "improvement" is for the component (i.e. if we modify the code and speed increases, then it's an improvement). Our application is composed of a search engine plus many other components, and understanding the speed of the search engine is also critical to the end-user.
It seems to be hard to search for a solution since I'm not sure how to properly define my problem. But what I've found so far seems to be basic error logging techniques. A solution whose purpose is to run statistics (e.g. statistical regressions) off of the data would be best. Maybe unit testing frameworks have built-in test timers, but we need to capture data from live runs of our application to account for the numerous different scenarios.
So really there are two questions:
1) Is there a predefined solution for these sorts of tests?
2) Is there any good reference for running statistical regressions on this kind of data? Say we captured the execution time of the script and the size of the input data (e.g. the query). We can regress time on data size to understand the effect of changing the data size on the execution time (something like the sketch below). But these sorts of regressions are tricky, since it's not clear what all of the relevant variables are. Any reference on analyzing performance data would be excellent, and would benefit many people, I believe!
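To make the regression in point 2 concrete, here is a minimal MATLAB sketch, with made-up placeholder numbers standing in for logged runs (the variable names are hypothetical):
inputSize = [1e3; 5e3; 1e4; 5e4; 1e5];        % input/query sizes from logged runs (placeholders)
execTime  = [0.02; 0.09; 0.21; 1.10; 2.40];   % measured execution times in seconds (placeholders)
coeffs = polyfit(inputSize, execTime, 1);     % fit time ~ a*size + b
fprintf('time ~ %.3g * size + %.3g seconds\n', coeffs(1), coeffs(2));
% Plot the data and the fit to check whether a linear model is even appropriate
plot(inputSize, execTime, 'o', inputSize, polyval(coeffs, inputSize), '-');
xlabel('input size'); ylabel('execution time (s)');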
Thanks
Matt
Big apps like these are going to be doing a lot of non-CPU processing, so to find optimization points you're going to need wall-clock-based, not CPU-based, sampling. gprof and some others only sample on CPU time, so they cannot see needless I/O or other system calls. If you do manage to find and remove CPU-intensive performance problems, the I/O-intensive ones will only become a larger fraction of the time.
Take a look at Zoom. It's a stack sampler that reports, by line of code, the percentage of wall-clock time that line is on the stack. Any code point worth optimizing will probably be such a line. It also has a nice butterfly view for browsing the call graph. (You don't want the call graph as a whole; it would be a meaningless rat's nest.)