The last 3 lines of wget -i urls.txt:
FINISHED --2012-05-16 12:58:08--
Total wall clock time: 1h 56m 52s
Downloaded: 1069 files, 746M in 1h 52m 49s (113 KB/s)
There are two different times:
1h 56m 52s
1h 52m 49s
Why are they different? What do they stand for?
Wall clock time, or real time, is the passage of time as a human user perceives it: the time we actually experience while waiting. In this case wget may have needed less than the real time to do its own work, but the real time is the sum of the time the software spent doing its actual job and the time it spent waiting for resources such as the hard disk, the network, and so on.
When you have a wall clock time and a shorter time, the shorter time is usually user time, and the missing time is system time (time spent in the kernel) or time spent waiting for something like a file descriptor. (But I have not checked which is the case with wget.) If you are curious, start time wget http://some.url or look into /proc/<wget-pid>/stat while it's running (assuming you are running on Linux).
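For the /proc route, here is a minimal Python sketch (Linux only, and purely illustrative: pass the PID of the running wget on the command line) that pulls the user and system CPU time out of /proc/<pid>/stat:

import os
import sys

# Read the stat line of the given process (e.g. a running wget).
pid = int(sys.argv[1])
with open(f"/proc/{pid}/stat") as f:
    stat = f.read()

# utime and stime are fields 14 and 15 of /proc/<pid>/stat, counted in
# clock ticks; the command name ends at the last ')', so split there first.
fields = stat.rsplit(")", 1)[1].split()
utime_ticks, stime_ticks = int(fields[11]), int(fields[12])

ticks_per_second = os.sysconf("SC_CLK_TCK")
print("user CPU seconds:  ", utime_ticks / ticks_per_second)
print("system CPU seconds:", stime_ticks / ticks_per_second)

Both numbers together will typically be far smaller than the wall clock time wget reports, because most of its life is spent waiting on the network.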
I was following the Go blog here
I tried to profile my program, but the output looks a bit different. (It seems that Go has moved from sampling to instrumentation?)
I wonder what these numbers mean
Especially the line: Showing nodes accounting for 2.59s, 92.5% of 2.8s
What does total sample = 2.8s mean? Is the sample drawn over an interval of 2.8 seconds?
Does it mean that only nodes that run for more than 92.5% of the sample time are shown?
Also, I wonder how these numbers are generated. In the original Go blog post, the measure is how many times a function is detected in execution among all the samples. However, we are dealing with seconds here. How does the Go profiling tool know how many seconds a function call takes?
Any help will be appreciated
Think of the graph as a graph of a resource: time. You start at the top with, for example, 10 seconds. Then you see that 5 seconds went to time.Sleep and 5 went to encoding/json. How that time is divided up is represented by the arrows, so they show that 5 seconds went to each part of the program. So now we have three nodes: the top node with 10 seconds, time.Sleep with 5 seconds, and encoding/json with 5 seconds. Those 5 seconds in encoding/json are then broken down further into the functions that took up most of the time.

The 0.01s (smaller percentage) out of 0.02s (larger percentage) means that this function took 0.01s of processing time out of the 0.02s of time (the arrow with the number) attributed to this particular call stack. The percentage represents the share of the whole program's execution time that this part took up. So you might see that encoding/json's string encoder took 0.36 percent of the total execution time/resources of your program.
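As for how seconds show up at all: as far as I know it is still sampling. The CPU profiler interrupts the program at a fixed rate (Go's default is about 100 samples per second), counts how many samples land in each function, and multiplies the count by the sampling interval to get seconds. A rough sketch of the arithmetic, with made-up sample counts:

# Hypothetical numbers, just to show how the pprof figures relate.
sample_interval = 0.01   # seconds per sample, assuming ~100 Hz sampling
shown_samples = 259      # samples that fall in the nodes pprof displays
total_samples = 280      # all samples collected during the run

shown_time = shown_samples * sample_interval   # 2.59 s
total_time = total_samples * sample_interval   # 2.80 s
print(f"Showing nodes accounting for {shown_time:.2f}s, "
      f"{shown_time / total_time:.1%} of {total_time:.2f}s total")

So the 92.5% means the nodes pprof chose to display account for that share of the total sampled time; it does not mean that only functions running more than 92.5% of the time are shown.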
I create a new cell in my jupyter notebook.
I type %%time in the first line of my new cell.
I type some code in the second line.
I run this cell and get some information as follows:
CPU times: user 2min 8s, sys: 14.5 s, total: 2min 22s
Wall time: 1min 29s
My question is: what do these parameters mean?
CPU times: user, sys, total (I think it means user + sys), and wall time.
If we run the code below in a cell:
%%time
from time import sleep
for i in range(3):
    print(i, end=' ')
    sleep(0.1)
The output is:
0 1 2
CPU times: user 5.69 ms, sys: 118 µs, total: 5.81 ms
Wall time: 304 ms
The wall time means that a clock hanging on a wall outside of the computer would measure 304 ms from the time the code was submitted to the CPU to the time when the process completed.
User time and sys time both refer to time the CPU actually spent working on our code. The CPU time dedicated to our code is only a fraction of the wall time, partly because the process spends most of the wall time sleeping and partly because the CPU swaps its attention between our code and the other processes running on the system.
User time is the amount of CPU time spent outside of the kernel. Sys time is the amount of CPU time spent inside the kernel. The total CPU time is user time + sys time. The difference between user and sys time is explained well in this post:
What do 'real', 'user' and 'sys' mean in the output of time(1)?
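If you want to watch the user/sys split from inside Python itself rather than through %%time, a minimal sketch using os.times() (Unix semantics assumed; the workload lines are only illustrative) looks like this:

import os

before = os.times()

sum(i * i for i in range(10**6))        # mostly user time: pure computation
with open("/dev/urandom", "rb") as f:   # mostly sys time: the kernel does the work
    f.read(10**7)

after = os.times()
print("user:", after.user - before.user)      # CPU time spent in our own code
print("sys: ", after.system - before.system)  # CPU time spent in the kernel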
I have a very simple task of updating a database.
my $pm = new Parallel::ForkManager(15);
for my $line (@lines){
    my $pid = $pm->start and next;
    my $dbh2 = $dbh->clone();
    my $sth2 = $dbh2->prepare("update db1 set field1=? where field2 =?");
    my ($field1, $field2) = very_slow_subroutine();
    $sth2->execute($field1, $field2);
    $pm->finish;
}
$pm->wait_all_children;
I could just use $dbh2->do, but I doubt it is the reason for the slowness.
What is interesting is that it seems to start these 15 processes (or however many I specify) very quickly, but right after that it slows down drastically. It is still noticeably faster than without forking, but I would expect more...
Edit:
very_slow_subroutine is a sub which gets an answer from a web service. The service can take anywhere from a fraction of a second to several seconds (on timeout) to answer. I have to ask it dozens of thousands of times... which is the reason I would like to fork.
And if this matters -- I am on Linux.
Parallel::ForkManager doesn't magically make things faster, it just lets you run your code multiple times and at the same time. In order to get the benefit out of it, you have to design your code for parallelism.
Think of it this way. It takes you 10 minutes to get to the store, shop, load your car, come back, and unload it. You need to get 5 loads. You alone can do it in 50 minutes. That is working in serial. 10 minutes * 5 trips one after the other = 50 minutes.
Let's say you get four friends to help. You all start off for the store at the same time. There's still 5 trips, and they still take 10 minutes, but because you did it in parallel the total time is only 10 minutes.
But it will never take less than 10 minutes, no matter how many trips you have to make or how many friends you get to help. That is why the process starts up fast, everybody gets into their cars and drives off to the store, but then nothing happens for a while because it still takes 10 minutes for everyone to do their job.
Same thing here. Your loop body takes X time to run. If you iterate through it Y times, it will take X * Y real world human time to run. If you run it in parallel Y times, ideally it will take just X time to run. Each parallel worker must still execute the full body of the loop taking X time.
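Put as a formula: if one pass through the loop takes X seconds and there are Y passes, serial execution takes X * Y, while N parallel workers ideally need about X * ceil(Y / N). The total can approach X * Y / N, but it never drops below X, the cost of a single pass.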
In order to speed things up further, you have to break up the big bottleneck of very_slow_subroutine and make that work in parallel. Your SQL is so simple that there is little to gain there; very_slow_subroutine is where you should focus your efforts at optimization and parallelism.
Let's say the store is really close, it's only a 1 minute drive (this is your SQL UPDATE), but shopping, loading and unloading takes 9 minutes (this is very_slow_subroutine). What if instead you have 5 cars and 15 friends. You load 3 people into each car. Driving to and from the store will take the same time, but now three people are working together to do the shopping, loading and unloading taking only 4 minutes. Now each trip takes 5 minutes instead of 10.
This represents redesigning very_slow_subroutine to do its work in parallel. If it's just a big loop, you can put more workers on that loop. If it's a series of slow operations, you will have to redesign it to take advantage of parallel execution.
If you use too many workers you can clog up the system; it depends on what the bottleneck is. If the work is CPU bound and you have 2 CPU cores, you'll probably see performance gains up to 3 to 5 workers ((cores * 2) + 1 is a good rule of thumb), and after that performance will drop off as the CPU spends more time switching between processes than doing work. If the bottleneck is I/O, or an external service as is often the case with database and network calls, you can see great efficiencies by throwing many workers at the problem. While one process is waiting around for a disk or network operation, the others can be using your CPU.
Whether parallelism can help depends on where your bottleneck is. If your CPU with 4 cores is the bottleneck, forking 4 processes might cause things to complete in about 1/4th the time under the best case scenario, but spawning 15 processes is not going to improve things much more.
If, more likely, your bottleneck is in I/O, starting 15 processes that compete for the same I/O is not going to help much, although in cases where you have tons of memory to use as file cache, some improvement might be possible.
To explore the limits on your system, consider the following program:
#!/usr/bin/env perl
use strict;
use warnings;
use Parallel::ForkManager;
run(@ARGV);
sub run {
    my $count = @_ ? $_[0] : 2;
    my $pm = Parallel::ForkManager->new($count);
    for (1 .. 20) {
        $pm->start and next;
        sleep 1;
        $pm->finish;
    }
    $pm->wait_all_children;
}
My ancient laptop has a single CPU with 2 cores. Let's see what I get:
TimeThis : Command Line : perl sleeper.pl 1
TimeThis : Elapsed Time : 00:00:20.735
TimeThis : Command Line : perl sleeper.pl 2
TimeThis : Elapsed Time : 00:00:06.578
TimeThis : Command Line : perl sleeper.pl 4
TimeThis : Elapsed Time : 00:00:04.578
TimeThis : Command Line : perl sleeper.pl 8
TimeThis : Elapsed Time : 00:00:03.546
TimeThis : Command Line : perl sleeper.pl 16
TimeThis : Elapsed Time : 00:00:02.562
TimeThis : Command Line : perl sleeper.pl 20
TimeThis : Elapsed Time : 00:00:02.563
So, running with a maximum of 20 processes gives me a total run time of just over 2.5 seconds for sleeping one second 20 times.
On the other hand, with just one process, sleeping one second 20 times took just over 20 seconds. That is a huge improvement, but it also indicates a management overhead of more than 150% when you have 20 processes each sleeping for one second.
This is in the nature of parallel programming. There are a lot of formal treatments out there on what you can expect, but Amdahl's Law is required reading.
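The one-line version of Amdahl's Law: if a fraction p of the job can run in parallel, the best possible speedup with n workers is 1 / ((1 - p) + p / n), and no matter how large n gets it can never exceed 1 / (1 - p). The serial parts, like starting the processes and waiting for the last one, set the ceiling.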
A curious question on timing: when measuring wall clock time in any language, for example with Python's time.time(), does that time include the CPU/system time (time.clock()) as well?
In Python, time.time() gives you the elapsed time (also known as wall time). That includes CPU time, inasmuch as CPU time is a subset of wall time, but you cannot extract the CPU time from time.time() itself.
For example, if your process runs for ten seconds but uses the CPU for only five of those seconds, the former includes the latter.
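There is no way to pull the CPU component back out of a single time.time() reading; you have to measure it separately. A minimal sketch (note that time.clock() was removed in Python 3.8; time.process_time() is the usual replacement for CPU time):

import time

wall_start = time.time()
cpu_start = time.process_time()

time.sleep(1)                       # waiting: wall time advances, CPU time barely does
sum(i * i for i in range(10**6))    # computing: both advance

wall = time.time() - wall_start
cpu = time.process_time() - cpu_start
print(f"wall: {wall:.2f}s  cpu: {cpu:.2f}s  cpu-bound fraction: {cpu / wall:.0%}")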
There are only two hard things in Computer Science: cache invalidation and naming things.
-- Phil Karlton
My app is reporting CPU time, and people reasonably want to know how much time this is out of, so they can compute % CPU utilized. My question is, what's the name for the wall clock time times the number of CPUs?
If you add up the total user, system, idle, etc. time for a system, you get the total wall clock time, times the number of CPUs. What's a good name for that? According to Wikipedia, CPU time is:
CPU time (or CPU usage, process time) is the amount of time for which a central processing unit (CPU) was used for processing instructions of a computer program, as opposed to, for example, waiting for input/output (I/O) operations.
"total time" suggests just wall clock time, and doesn't connote that over a 10 second span, a four-cpu system would have 40 seconds of "total time."
Total Wall Clock Time of all CPUs
Naming things is hard, so why waste a good 'un once you've found it?
Aggregate time: 15 of 40 seconds.
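Whichever name wins, the arithmetic people actually want is: utilization = CPU time used / (wall clock time x number of CPUs). Over a 10 second span on a four-CPU box, 15 CPU-seconds of work is 15 / 40 = 37.5% utilization.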