I have been doing some benchmarking in Ruby and have the following results:
            user     system      total        real
part1   0.156000   0.000000   0.156000 (  0.158009)
            user     system      total        real
part2   0.015000   0.000000   0.015000 (  0.162010)
Commonly, as in part1, the total and real times are nearly the same. However, this is not true in part2.
What is the meaning of the divergence between total and real in part2?
Does the divergence raise any concerns?
Which run is faster?
user and system are CPU times, measured by the kernel that scheduled your process.
real time is the elapsed wall-clock time.
So real time being bigger than user + system means either:
I/O or sleep in the code under test, or
another process/daemon consuming the CPU.
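For a concrete picture of the first case, here is a minimal Go sketch (assuming a Unix system, since it relies on syscall.Getrusage): a sleep inflates real time while user + system CPU time stays near zero.

package main

import (
	"fmt"
	"syscall"
	"time"
)

// cpuSeconds returns the user+system CPU time consumed so far by this
// process (Unix only; the error from Getrusage is ignored for brevity).
func cpuSeconds() float64 {
	var ru syscall.Rusage
	_ = syscall.Getrusage(syscall.RUSAGE_SELF, &ru)
	return float64(ru.Utime.Sec) + float64(ru.Utime.Usec)/1e6 +
		float64(ru.Stime.Sec) + float64(ru.Stime.Usec)/1e6
}

func main() {
	start, cpu0 := time.Now(), cpuSeconds()

	time.Sleep(150 * time.Millisecond) // burns real time, not CPU time

	fmt.Printf("cpu (user+sys): %.6f s\n", cpuSeconds()-cpu0)           // ~0.000
	fmt.Printf("real:           %.6f s\n", time.Since(start).Seconds()) // ~0.150
}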
The results are organized in columns, in this order: user CPU time, system CPU time, the sum of the user and system CPU times, and elapsed real (wall-clock) time. All values are in seconds. So in real time, part1 was faster than part2.
Related
Assume that a query is decomposed into a serial part and a parallel part. The serial part occupies 20% of the entire elapsed time, whereas the rest can be done in parallel.
Given that the one-processor elapsed time is 1 hour, what is the speedup if 10 processors are used? (For simplicity, you may assume that during the parallel processing of the parallel part, the task is divided equally among all participating processors.)
Even with n processors, the serial part takes up the same share (0.2) of computation time as in the one-processor elapsed time (OPELT = 1 hour) scenario. The other 80% can be done in parallel and is therefore divided by the number of processors available, giving an n-processor elapsed time of:
0.2*OPELT + (0.8*OPELT)/n
The speedup S(n) is then the ratio between the one-processor elapsed time and the n-processor elapsed time.
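Worked out for n = 10:

S(n) = OPELT / (0.2*OPELT + (0.8*OPELT)/n) = 1 / (0.2 + 0.8/n)
S(10) = 1 / (0.2 + 0.08) ≈ 3.57

So ten processors give roughly a 3.6x speedup, and even infinitely many processors could not exceed 1/0.2 = 5x (Amdahl's law).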
cputime () returns the CPU time used by the current Octave session. The documentation gives the following way to measure the processing time of a piece of code:
t = cputime;
... % your code
... % computing
... % when you are done
printf ('Total cpu time: %f seconds\n', cputime - t);
The Octave documentation also describes the etime () function, which returns the difference (in seconds) between two timestamps taken from clock (), like this:
t0 = clock ();
# many computations later...
elapsed_time = etime (clock (), t0);
What's the difference between these two approaches? Does etime use my system's wall clock? And what's a good way to measure the processing time of a piece of code?
Your first code snippet with cputime measures the time your code was executed on a CPU. The second snippet measures the wall-clock time (by the way: use tic and toc to measure this). The wall clock includes time when the CPU was executing other threads or waiting for I/O.
It is up to you what you want to measure. I normally use the wall clock with tic/toc if I want to benchmark my code.
In single-threaded apps the wall-clock time is greater than or equal to the cputime (because it includes the time the CPU spent on other threads). In multithreaded apps running on multiple cores, cputime can be greater than the wall-clock time.
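To illustrate that last point, here is a minimal Go sketch (again assuming a Unix system for syscall.Getrusage, and a machine with more than one core): with one busy loop per core, CPU time accumulates roughly NumCPU times faster than the wall clock.

package main

import (
	"fmt"
	"runtime"
	"sync"
	"syscall"
	"time"
)

// cpuSeconds returns the user+system CPU time of this process (Unix only).
func cpuSeconds() float64 {
	var ru syscall.Rusage
	_ = syscall.Getrusage(syscall.RUSAGE_SELF, &ru)
	return float64(ru.Utime.Sec) + float64(ru.Utime.Usec)/1e6 +
		float64(ru.Stime.Sec) + float64(ru.Stime.Usec)/1e6
}

func main() {
	start, cpu0 := time.Now(), cpuSeconds()

	var wg sync.WaitGroup
	for i := 0; i < runtime.NumCPU(); i++ { // one busy loop per core
		wg.Add(1)
		go func() {
			defer wg.Done()
			for t := time.Now(); time.Since(t) < 500*time.Millisecond; {
				// spin
			}
		}()
	}
	wg.Wait()

	// On a 4-core machine, expect real ~0.5 s but cpu ~2.0 s.
	fmt.Printf("real: %.2f s  cpu: %.2f s\n",
		time.Since(start).Seconds(), cpuSeconds()-cpu0)
}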
A curious question on timing: when measuring wall-clock time in a language such as Python with time.time(), does that time include the CPU/system time reported by time.clock() as well?
In Python, time.time() gives you the elapsed time (also known as wall time). That includes CPU time inasmuch as CPU time is a subset of wall time, but you cannot extract the CPU time from time.time() itself.
For example, if your process runs for ten seconds but uses the CPU for only five of those seconds, the former includes the latter.
There are only two hard things in Computer Science: cache invalidation
and naming things.
-- Phil Karlton
My app is reporting CPU time, and people reasonably want to know how much time this is out of, so they can compute % CPU utilized. My question is, what's the name for the wall clock time times the number of CPUs?
If you add up the total user, system, idle, etc. time for a system, you get the total wall clock time, times the number of CPUs. What's a good name for that? According to Wikipedia, CPU time is:
CPU time (or CPU usage, process time) is the amount of time for which
a central processing unit (CPU) was used for processing instructions
of a computer program, as opposed to, for example, waiting for
input/output (I/O) operations.
"total time" suggests just wall clock time, and doesn't connote that over a 10 second span, a four-cpu system would have 40 seconds of "total time."
Total Wall Clock Time of all CPUs
Naming things is hard, so why waste a good 'un once you've found it?
Aggregate time: 15 of 40 seconds.
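As a sketch of the arithmetic behind that report (the 15 CPU-seconds and the 10-second window are hypothetical figures, as is the output format): the aggregate capacity is the wall-clock span multiplied by the number of CPUs, and utilization is the reported CPU time divided by that capacity.

package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// Hypothetical figures: a 10-second measurement window during
	// which the app reported 15 CPU-seconds across all cores.
	wall := 10 * time.Second
	cpuUsed := 15.0

	// Aggregate capacity: wall-clock time times the number of CPUs.
	capacity := wall.Seconds() * float64(runtime.NumCPU())

	fmt.Printf("aggregate time: %.0f of %.0f CPU-seconds (%.1f%% utilized)\n",
		cpuUsed, capacity, 100*cpuUsed/capacity)
}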
I need to calculate the CPU usage and aggregate it from the proc file in Linux.
/proc/stat gives me data, but how do I work out the % of CPU used at a given time? As far as I can tell, stat gives me the count of processes running on each core, which does not give me any idea of the % use of the CPU.
I am coding this in Golang and have to do it without scripts.
Thanks in advance!!
/proc/stat does not only give you the count of processes on each core. man proc will tell you the exact format of that file. Copied from it, here is the part you should be interested in:
/proc/stat
cpu 3357 0 4313 1362393
The amount of time, measured in units of USER_HZ
(1/100ths of a second on most architectures, use
sysconf(_SC_CLK_TCK) to obtain the right value), that the
system spent in user mode, user mode with low priority
(nice), system mode, and the idle task, respectively.
The last value should be USER_HZ times the second entry
in the uptime pseudo-file.
It is then easy to take the difference of the idle field between two measurements, which gives you the time this CPU spent doing nothing. The other value you can extract is the time spent doing something, which is the difference between two measurements of:
time in user mode + time in user mode with low priority (nice) + time in system mode
You will then have two values: one, A, expressing the time doing nothing, and the other, B, the time actually doing something. B / (A + B) gives you the percentage of time the CPU was busy, as the sketch below shows.
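Here is a minimal Go sketch of that computation. It sticks to the four fields quoted above (user, nice, system, idle) and ignores the extra fields (iowait, irq, ...) that modern kernels append; the one-second sampling interval is an arbitrary choice.

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// readCPU parses the aggregate "cpu" line of /proc/stat and returns
// busy (user + nice + system) and idle jiffies.
func readCPU() (busy, idle uint64, err error) {
	data, err := os.ReadFile("/proc/stat")
	if err != nil {
		return 0, 0, err
	}
	// The first line looks like: cpu  3357 0 4313 1362393 [more fields...]
	fields := strings.Fields(strings.SplitN(string(data), "\n", 2)[0])
	vals := make([]uint64, 4)
	for i := range vals {
		if vals[i], err = strconv.ParseUint(fields[i+1], 10, 64); err != nil {
			return 0, 0, err
		}
	}
	return vals[0] + vals[1] + vals[2], vals[3], nil
}

func main() {
	busy0, idle0, err := readCPU()
	if err != nil {
		panic(err)
	}
	time.Sleep(time.Second) // sampling interval
	busy1, idle1, err := readCPU()
	if err != nil {
		panic(err)
	}

	b := float64(busy1 - busy0) // B: time doing something
	a := float64(idle1 - idle0) // A: time doing nothing
	fmt.Printf("cpu busy: %.1f%%\n", 100*b/(a+b))
}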