Determining maximum memory use of a process - Windows

I want to run a command and, when it's complete, have a record of the maximum memory use of the resulting process. For instance, I want something analogous to the 'time' command on Linux, where 'time foo' will run 'foo' and, when 'foo' exits, will print out the amount of CPU time that 'foo' took.
For my present application I need this to run on Windows, but if you know of a Linux-only program let me know too. (At the very least it'd be interesting, but it may also give me a lead to find a Windows equivalent.)

You can, if you have Vista (and probably Windows 7 too, though I'm not sure). Go to Start -> Control Panel -> System and Maintenance -> Administrative Tools -> Reliability and Performance Monitor -> Performance Monitor -> Create new watch (green + symbol) -> Process -> Working Set -> [select a process below] and press OK. You can log this, etc.
Screenshot: http://www.freeimagehosting.net/image.php?912df44d75.jpg

I don't know of a program that does this, but there are APIs.
If you're using .NET, use the Process.TotalProcessorTime property.
If you're using native code, use the GetProcessTimes() function.
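If it helps, here is a rough sketch of driving those APIs from Python via ctypes (Windows only). GetProcessTimes() gives the CPU times; for the peak-memory part of the question, GetProcessMemoryInfo() from PSAPI reports PeakWorkingSetSize. The command line is a placeholder and grabbing Popen._handle is a CPython implementation detail, so treat this as a sketch rather than production code:
import ctypes
import ctypes.wintypes as wt
import subprocess

class PROCESS_MEMORY_COUNTERS(ctypes.Structure):
    _fields_ = [("cb", wt.DWORD),
                ("PageFaultCount", wt.DWORD),
                ("PeakWorkingSetSize", ctypes.c_size_t),
                ("WorkingSetSize", ctypes.c_size_t),
                ("QuotaPeakPagedPoolUsage", ctypes.c_size_t),
                ("QuotaPagedPoolUsage", ctypes.c_size_t),
                ("QuotaPeakNonPagedPoolUsage", ctypes.c_size_t),
                ("QuotaNonPagedPoolUsage", ctypes.c_size_t),
                ("PagefileUsage", ctypes.c_size_t),
                ("PeakPagefileUsage", ctypes.c_size_t)]

def filetime_to_seconds(ft):
    # FILETIME counts 100-nanosecond ticks
    return ((ft.dwHighDateTime << 32) + ft.dwLowDateTime) / 1e7

proc = subprocess.Popen(["foo.exe"])          # placeholder command
proc.wait()
handle = int(proc._handle)                    # CPython-specific process handle

creation, exited, kernel, user = (wt.FILETIME() for _ in range(4))
ctypes.windll.kernel32.GetProcessTimes(handle,
    ctypes.byref(creation), ctypes.byref(exited),
    ctypes.byref(kernel), ctypes.byref(user))

counters = PROCESS_MEMORY_COUNTERS()
counters.cb = ctypes.sizeof(counters)
ctypes.windll.psapi.GetProcessMemoryInfo(handle, ctypes.byref(counters), counters.cb)

print("kernel CPU (s):", filetime_to_seconds(kernel))
print("user CPU (s):", filetime_to_seconds(user))
print("peak working set (bytes):", counters.PeakWorkingSetSize)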

I've created a command-line tool for analysing the memory usage of long-running programs.
See here: MemoryUsageMonitor

I have created a simple Windows program called timemem.exe that behaves similarly to /usr/bin/time on Linux/Mac OS X, and will show similar statistics, such as elapsed time, user and kernel CPU time, and maximum working set size in memory used by another Win32 process. See:
http://homepage.mac.com/jafingerhut/files/code/code.html

Related

Bash Cluster Monitor - Help for assignment

I am a student at university and I am stuck with some code for my assignment, and would appreciate help from the community. I am very new to bash; I would say I barely know how to write it at all and I struggle with it. I much prefer Python or C#. Anyway, I am required to create a cluster monitor. I have to create a simple menu with 2 options that allows the user to choose between "Cluster Status" and "Process Analytics". I have created the menu, it's very basic, and the code for that is below:
#!/bin/bash
clear
echo "Choose one of the following options: "
echo "1) Cluster Status"
echo "2) Process Analytics"
echo "3) Exit"
read ans
if [ "$ans" = "1" ]
then
    echo "Loading..."
    bash Cluster_Status.sh
elif [ "$ans" = "2" ]
then
    echo "Loading..."
    bash Process_Analytics.sh
elif [ "$ans" = "3" ]
then
    echo "Loading..."
    exit 0
fi
In this menu, I also need to make it so that when an option has finished executing, it pauses and then returns to the main menu.
I have 5 nodes in the cluster, with sample data in them, which I can provide if anyone wants to help with this.
The parts I am most stuck on are the cluster status and the process analytics scripts.
The cluster status script should print the current stats of the entire cluster to the screen and to a file, with all of its parameters taken from a nodeconfig.read.me file, which I have. Each stat needs to be presented as either a sum or an average; for example, the total number of CPUs would be a sum, while the overall CPU load would be an average. I need to present this in a table, which I cannot figure out how to do, as I am limited in the number of different commands I can use.
The process analytics script should process the current processes running on each node, if there are any, and present the following stats on screen as well as save them to a file:
• Most popular process (in terms of instances running)
• Most CPU demanding process (in terms of CPU usage)
• Most MEM demanding process (in terms of Memory usage)
• Most Disk demanding process (in terms of Disk usage)
• Most Net demanding process (in terms of Network usage)
• 5 top users of CPU/MEM/DISK & Net (separate tables)
I have to do this over PuTTY, so I am limited to the built-in commands available there: cat, grep, ls, pipes, echo, tr, tee, cut, touch, head, tail. These are the only things I can use when writing this, and because of that limitation I am stuck trying to work out how to write it. I have researched how to do this for the last week and I still do not know how to complete it. I am not asking for anyone to complete the assignment for me; I just want help on how to use these commands to create the script. Thank you for your help, and I am sorry if this is posted in the wrong area. I am still fairly new to using Stack Overflow.

CPU Time of Tasks in Ada

I have a program in which about 10 tasks are running at the same time. They all call the same function with different values, and I want to know how long each of my tasks takes to execute this function. However, it seems to me that I can't use something like this:
A := Clock;
MyFunction(...);
B := Clock;
Time := B - A;
Indeed, I think this would not return the "real" CPU time, but rather the wall-clock time elapsed between the beginning and the end of the function, which is not what I want, because my tasks could be switched out while they are executing the function. So I'm wondering if there is a way to know the "real" CPU time of each of my tasks, i.e. the time they actually spent using the CPU?
I think you need the facilities of Annex D.14:
"This subclause describes a language-defined package to measure execution time."
"The execution time or CPU time of a given task is defined as the time spent by the system executing that task, including the time spent executing run-time or system services on its behalf."
This is specified in Ada 2005 and Ada 2012. Whether it's available in your compiler on your operating system is another issue! For example, "Execution_Time is not supported in this configuration" on Mac OS X (GNAT GPL 2013, GCC 4.8).
Burns & Wellings, "Concurrent and Real-Time Programming in Ada", describes this as an execution-time clock (ch. 15.5, if you can get hold of a copy); it was added in Ada 2005.
You would be looking for a package in your installation called "Ada.Execution_Time". Hopefully its package spec can tell you all you need to know.
There will probably also be a child package "Ada.Execution_Time.Timers", which lets you set up events to occur, e.g. when a CPU time budget is exceeded.
On my system, using Gnat installed in /usr/local/bin, the relevant file is:
/usr/local/lib/gcc/x86_64-unknown-linux-gnu/4.7.2/adainclude/a-exetim.ads
and the function you need is Ada.Execution_Time.Clock.
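(Not Ada, but to illustrate the distinction the question is about: any per-thread execution-time clock behaves the same way. Here is a quick sketch in Python, where time.thread_time() plays the role of Ada.Execution_Time.Clock and the workload is just a placeholder.)
import threading
import time

def my_function():
    sum(i * i for i in range(10**6))   # placeholder workload

def worker(name):
    wall0 = time.monotonic()           # wall clock: advances even while preempted
    cpu0 = time.thread_time()          # execution-time clock: only this thread's CPU time
    my_function()
    wall = time.monotonic() - wall0
    cpu = time.thread_time() - cpu0
    print("%s wall=%.3fs cpu=%.3fs" % (name, wall, cpu))

threads = [threading.Thread(target=worker, args=("task%d" % i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()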

How can I pause execution of a Ruby script and serialize the process to disk?

My script takes a lot of time to run. I would like to be able to pause it (e.g. by pressing p) and save the process to disk (e.g. by pressing s) so I can resume it later from the HDD. Libraries like Thread or gems like Celluloid can pause parts of the code, but as far as I have seen they cannot save the current process to disk.
Ideally, I would like to just put a few lines of code at the beginning of the script, or something similarly easy.
TL;DR
You are solving the wrong problem. If your script takes too long to run, speed up your script rather than try to serialize an OS process.
Alternative Approaches
If you insist on being able to freeze processes and save state to disk, you may want to consider running your processes inside a virtual machine like VirtualBox or VMware. Both of these products support the ability to pause a virtual machine and save the VM's current state to disk.
I'm unaware of any way to store a running OS process on disk other than inside some sort of virtualization layer. If you really need this functionality, that's the approach I'd recommend. However, you'll probably get more bang for your buck by improving the efficiency of your code (profile or benchmark for bottlenecks), scaling up your system, or scaling out your program's tasks in a distributed way.
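For completeness, here is a minimal sketch of driving that VirtualBox approach from a wrapper script (Python calling the VBoxManage CLI). The VM name is hypothetical, and the long-running script would live inside that VM:
import subprocess
import sys

VM_NAME = "my-worker-vm"   # hypothetical VM that runs the long script

def pause_and_save():
    # Freeze the whole VM and write its state (RAM, CPU state, ...) to disk.
    subprocess.check_call(["VBoxManage", "controlvm", VM_NAME, "savestate"])

def resume():
    # Restore the saved state and continue exactly where the VM left off.
    subprocess.check_call(["VBoxManage", "startvm", VM_NAME, "--type", "headless"])

if __name__ == "__main__":
    {"pause": pause_and_save, "resume": resume}[sys.argv[1]]()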

How to check Matplotlib's speed in Xcode and increase performance?

I'm running into some considerable speed bottlenecks with a Python-Matplotlib-Xcode combination. I know some immediate responses will probably ask "Why are you doing python stuff in Xcode, just man up and use vim" --> I like the organizing ability and the built-in version control; it makes elements of my work easier to deal with.
Getting Python to run in Xcode in the first place was a bit trickier than I had hoped, but it's possible. Now I have the following scenario:
A master file, 'main.py' does all the import stuff for me and sets up some universal formatting to make all the figures (for eventual inclusion in my PhD thesis) nice and uniform. Afterwards it runs a series of execfile commands to generate whichever graphics I need. Two things I can think of right off the bat:
1) At the very beginning of main.py, after I import all the normal Python stuff you tend to need, I call a system script which checks whether a certain filesystem is mounted. I keep all my climate model data on there since my local hard drive is too small to hold all of it at once. Python pauses itself and waits for the system to do its thing, but once the filesystem has been found, it keeps going. Usually this only needs to happen once in the morning when I get to work, or if the VPN server kicked me off for whatever reason. (Side question: it would be cool to know if there's a trick to automate a VPN login to reconnect as soon as it notices it's not connected.)
2) I'm not sure how much overhead Xcode adds on its own. Running the same program from the terminal is (somewhat) faster. I've tried to be memory-conscious and turn off stuff I don't need while running the Python/Xcode combination.
Also, Python launches a little window whenever I call plt.show(), and this in itself takes time. I've considered just saving the figures as quick png files and opening them with some other viewer, although I guess that would also take time to open. Given how often these graphics change as I add model runs or think of nicer ways of displaying the data, it would be nice not to waste something on the order of 15 to 30 minutes (possibly more) of the day twiddling my thumbs and waiting for a window to pop up.
Benchmark it!
import datetime
start = datetime.datetime.now()
# your plotting code
td = datetime.datetime.now() - start
print(td.total_seconds())  # total_seconds() requires Python >= 2.7
Run it in Xcode and from the command line, and see what the difference is.
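On the plt.show() point from the question: if the interactive window itself is what eats the time, one option is a non-interactive backend plus savefig(). A minimal sketch (the plotted data is just a placeholder):
import matplotlib
matplotlib.use("Agg")              # non-interactive backend; set before importing pyplot

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])      # placeholder data
fig.savefig("figure_01.png", dpi=150)
plt.close(fig)                     # free the figure when generating many plots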

how to stop a running script in Matlab [duplicate]

This question already has an answer here:
How to abort a running program in MATLAB?
(1 answer)
Closed 7 years ago.
I wrote a long-running script in MATLAB, e.g.
tic;
d = rand(5000);
[a,b,c] = svd(d);
toc;
It seems to run forever. Because I pressed F5 in the editor window, I cannot press Ctrl+Break to stop it from the MATLAB console.
I just want to know how to stop the script. I currently use Task Manager to kill MATLAB, which is really silly.
Thanks.
The MATLAB help says this:
For M-files that run a long time, or that call built-ins or MEX-files that run a long time, Ctrl+C does not always effectively stop execution. Typically, this happens on Microsoft Windows platforms rather than UNIX platforms. If you experience this problem, you can help MATLAB break execution by including a drawnow, pause, or getframe function in your M-file, for example, within a large loop. Note that Ctrl+C might be less responsive if you started MATLAB with the -nodesktop option.
So I don't think any such option exists. This happens with many complex MATLAB functions. Either we have to wait or not use them!
If Ctrl+C doesn't respond right away because your script is too long/complex, hold it down.
The break doesn't take effect while MATLAB is executing some of its deeper built-in code: either it won't log the Ctrl sequence in the buffer, or it clears the buffer just before or just after it completes those pieces of code. In either case, when MATLAB returns to executing more of your script, it will recognize that you are holding Ctrl+C and terminate.
For longer running programs, I usually try to find a good place to provide a status update and I always accompany that with some measure of time using tic and toc. Depending on what I am doing, I might use run time, segment time, some kind of average, etc...
For really long running programs, I found this to be exceptionally useful
http://www.mathworks.com/matlabcentral/fileexchange/16649-send-text-message-to-cell-phone/content/send_text_message.m
but it looks like they have some newer functions for this too.
MATLAB doesn't respond to Ctrl+C while executing a mex-implemented function such as svd. It also doesn't respond while it is allocating a big chunk of memory. A good practice is to always run your functions on a small amount of data first, and only run them at actual scale once all the tests pass. When time is an issue, you will want to analyze how much time each segment of code takes as well as its rough time complexity.
Consider having multiple MATLAB sessions. Keep the main session window (the pretty one with all the colours, file manager, command history, workspace, editor etc.) for running stuff that you know will terminate.
Stuff that you are experimenting with, say you are messing with the ode suite and get lots of "matrix is singular" warnings because you altered some parameter and didn't predict what would happen, run in a separate session:
dos('matlab -automation -r &')
You can kill that session without having to restart the whole of MATLAB.
One solution I adopted (for use with Java code, but the concept is the same with mex functions, just messier) is to return a FutureValue and then loop until FutureValue.finished(), or whatever it is called, returns true. The actual code executes in another thread/process. Wrapping a try/catch around that loop, with a FutureValue.cancel() in the catch block, works for me.
In the case of mex functions, you will need to return some sort of pointer (as an int) that points to a struct/object holding all the data you need (native thread handle, a completion flag, etc.). In the case of a built-in mex function, your mex function will most likely need to call that function in the separate thread. Mex functions are just DLLs/shared objects, after all.
Pseudocode:
FV = mexLongProcessInAnotherThread();
try
    while ~mexIsDone(FV)
        java.lang.Thread.sleep(100); % pause has a memory leak
        drawnow;                     % allow stdout/stderr from the mex to display in the command window
    end
catch
    mexCancel(FV);
end
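For what it's worth, the same poll-and-cancel shape sketched in Python for illustration only (long_process() is a placeholder, and the cancellation is cooperative, just as a mex worker would have to check a flag):
import threading
import time

cancel = threading.Event()
result = {}

def long_process():
    for _ in range(600):               # placeholder for the slow computation
        if cancel.is_set():            # cooperative cancellation check
            return
        time.sleep(0.1)
    result["value"] = 42

worker = threading.Thread(target=long_process)
worker.start()
try:
    while worker.is_alive():           # equivalent of ~mexIsDone(FV)
        time.sleep(0.1)                # keep the main thread interruptible
    print(result.get("value"))
except KeyboardInterrupt:
    cancel.set()                       # equivalent of mexCancel(FV)
    worker.join()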
Since you mentioned Task Manager, I'll guess you're using Windows. Assuming you're running your script within the editor, if you aren't opposed to quitting the editor at the same time as quitting the running program, the keyboard shortcut to end a process is:
Alt + F4
(By which I mean press the 'Alt' and 'F4' keys on your keyboard simultaneously.)
Alternatively, as mentioned in other answers,
Ctrl + C
should also work, but will not quit the editor.
If you are running MATLAB on Linux, you can terminate it with a command in the Linux console.
First, find MATLAB's PID with:
top
Then kill it with:
kill <PID>
For example:
kill 58056
To add on: you can insert a time check inside a loop that is computation-intensive or might deadlock, i.e.
:
section_toc_conditionalBreakOff;
:
where within this section:
if (toc > timeRequiredToBreakOff) % time-conditional break-off
    return;
    % other options may be:
    % 1. display intermediate values, then pause
    % 2. exit  % in extreme cases: kill/quit MATLAB
end
