I have two blocks of code which take 2 seconds each. In a classic sequential structure they run one after the other and take 4 seconds. In MPI form I expected the program to take about 2 seconds, but it takes 5 seconds. Why?
#include <mpi.h>

int main(int argc, char *argv[])
{
    int p, id;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    if (id == 0)
    {
        // 2-second block
    }
    if (id == 1)
    {
        // 2-second block
    }
    MPI_Finalize();
    return 0;
}
What takes 5 seconds?
If you measured the time for the whole program, the problem is that MPI_Init() and MPI_Finalize() are themselves quite time consuming.
To see a speedup you could make your "blocks" larger, or time only the computation itself rather than the whole run.
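For example, a minimal sketch (with a do_work() placeholder standing in for your 2-second block) that times only the compute region with MPI_Wtime(), so the MPI_Init()/MPI_Finalize() overhead is excluded from the measurement:

#include <stdio.h>
#include <mpi.h>

/* Placeholder for the 2-second block; replace with your real work. */
static void do_work(void) { /* ... */ }

int main(int argc, char *argv[])
{
    int id;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    double start = MPI_Wtime();      /* start timing after MPI_Init() */
    if (id == 0 || id == 1)
        do_work();
    double elapsed = MPI_Wtime() - start;

    printf("rank %d: %.3f s\n", id, elapsed);
    MPI_Finalize();                  /* finalization stays outside the timed region */
    return 0;
}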
This is my CUDA code:
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <chrono>
#include <cuda.h>

__global__ void test(int base, int* out)
{
    int curTh = threadIdx.x + blockIdx.x * blockDim.x;
    {
        int tmp = base * curTh;
        #pragma unroll
        for (int i = 0; i < 1000 * 1000 * 100; ++i) {
            tmp *= tmp;
        }
        out[curTh] = tmp;
    }
}

typedef std::chrono::high_resolution_clock Clock;

int main(int argc, char *argv[])
{
    cudaStream_t stream;
    cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking);
    int data = rand();
    int* d_out;
    void* va_args[10] = {&data, &d_out};
    int nth = 10;
    if (argc > 1) {
        nth = atoi(argv[1]);
    }
    int NTHREADS = 128;
    printf("nth: %d\n", nth);
    cudaMalloc(&d_out, nth * sizeof(int));
    for (int i = 0; i < 10; ++i) {
        auto start = Clock::now();
        cudaLaunchKernel((const void*) test,
                         nth > NTHREADS ? nth / NTHREADS : 1,
                         nth > NTHREADS ? NTHREADS : nth, va_args, 0, stream);
        cudaStreamSynchronize(stream);
        printf("use :%ldms\n",
               (long) std::chrono::duration_cast<std::chrono::milliseconds>(Clock::now() - start).count());
    }
    cudaDeviceReset();
    printf("host Hello World from CPU!\n");
    return 0;
}
I compile my code and run it on a 2080 Ti. I found that the elapsed time stays around 214 ms even when the thread count is 3 times the number of GPU cores (the 2080 Ti has 4352 cores).
root@d114:~# ./cutest 1
nth: 1
use :255ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
root@d114:~# ./cutest 13056
nth: 13056
use :272ms
use :223ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
use :214ms
root@d114:~# ./cutest 21760
nth: 21760
use :472ms
use :424ms
use :424ms
use :424ms
use :424ms
use :424ms
use :424ms
use :424ms
use :424ms
use :428ms
So my question is: why does the elapsed time stay the same while the number of threads increases to 3 times the number of GPU cores?
Does that mean the NVIDIA GPU's computing capacity is 3 times its core count?
Even though the GPU pipeline can issue a new instruction at a rate of one per cycle, it can overlap the instructions of at least 3-4 threads for simple math operations, so an increased number of threads only adds a few cycles of extra latency per thread. But as is visible at 21760 threads, issuing more of the same instruction eventually fills the pipeline completely, and threads start waiting.
21760/13056=1.667
424ms/214ms=1.98
This difference in ratios could come from the tail effect. When the pipelines are completely full, adding a small amount of extra work doubles the latency, because the new work is computed as a second wave of computation that starts only after all the others have completed (they all execute exactly the same instructions). You could add some more threads and the time should stay at 424 ms until you trigger a third wave of waiting threads; because the instructions are exactly the same for every thread and there is no divergence between them, from the outside they behave like uniform blocks of waiting work.
The loop iterating 100 million times with a complete dependency chain also limits memory accesses: only one memory operation per 100 million iterations puts very little bandwidth pressure on the card's memory.
So the kernel is neither compute- nor memory-bottlenecked (if you don't count the integer multiplication, which has no latency hiding within its own thread, as computation). Because of this, all SM units of the GPU run with nearly the same timing (plus a thread-launch latency that is invisible next to the 100M-iteration loop and grows linearly with more threads).
When the algorithm is a real-world one that uses multiple parts of the pipeline (not just integer multiplication), the SM can find more threads to overlap in the pipeline. For example, if an SM supports 1024 threads per block (and at most 2 blocks in flight) but has only 128 pipelines, then there are at least 2048 / 128 = 16 slots for overlapping operations such as reading main memory, floating-point multiplication/addition, reading constant cache, shuffling registers, etc., and this lets it complete a task more quickly.
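The dependency-chain point can be illustrated even on a CPU (a rough analogy of mine, not a 2080 Ti measurement): a single chain of dependent multiplies is bound by the multiplier's latency and leaves pipeline slots idle, while several independent chains in the same loop can be overlapped, so the extra work costs much less than proportional time. A minimal C sketch, assuming an optimizing build (e.g. -O2) and standard POSIX timing:

#include <stdio.h>
#include <time.h>

#define ITERS 100000000L

/* One fully dependent multiply chain: each multiply must wait for the previous result. */
static int dependent_chain(int x) {
    int tmp = x;
    for (long i = 0; i < ITERS; ++i)
        tmp *= tmp;
    return tmp;
}

/* Four independent chains: a pipelined core can overlap them, much like an SM overlaps threads. */
static int independent_chains(int x) {
    int a = x, b = x + 1, c = x + 2, d = x + 3;
    for (long i = 0; i < ITERS; ++i) {
        a *= a; b *= b; c *= c; d *= d;
    }
    return a + b + c + d;
}

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(int argc, char **argv) {
    (void)argv;
    int seed = argc + 2;                 /* non-constant seed so the loops are not folded away */
    double t0 = seconds();
    volatile int r1 = dependent_chain(seed);
    double t1 = seconds();
    volatile int r2 = independent_chains(seed);
    double t2 = seconds();
    /* The 4-chain version does 4x the multiplies but typically takes well under 4x the time. */
    printf("dependent:   %.3f s (r=%d)\n", t1 - t0, r1);
    printf("independent: %.3f s (r=%d)\n", t2 - t1, r2);
    return 0;
}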
My wasm code has a call to the POSIX sleep(seconds) function. This call is there to limit CPU consumption, but I notice no difference with or without sleep, whether for 1 or 1000 seconds.
My code initially had this structure:
#include <stdint.h>
typedef uint32_t u32;

void myfunc(u32 *buff){
    u32 size = 16;
    while (1){
        for (u32 i = 0; i < size; i++){
            // do stuff
        }
    }
}
myfunc() is called by a Web Worker, raising the CPU usage from 3% to 28%; when I terminate() the Web Worker the CPU drops back to 3%.
So I added a limiter to mitigate the CPU usage and keep it lower:
#include <unistd.h>

void myfunc(u32 *buff){
    u32 size = 16;
    while (1){
        sleep(1); // 1s or 1000s, same behavior
        for (u32 i = 0; i < size; i++){
            // do stuff
        }
    }
}
but this change has no effect on CPU usage; I only see that the sleep works and the thread is suspended for the requested time.
The for loop takes a fraction of a second, so the time spent sleeping is much greater than the time spent running.
I would add that when I run these tests there are no other CPU-intensive processes running, so I would expect a lower CPU usage with sleep(1000), for instance.
This only indicates that your environment implements the sleep function with a loop (you could probably verify that with a debugger).
If the stack-switching proposal were ready, merged, and implemented in your environment, an await on a promise would probably be used instead, but stack switching is not ready yet.
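For illustration only, here is a guess at what such a loop-based sleep could look like (a sketch, not the actual implementation of any particular toolchain):

#include <time.h>

/* Hypothetical busy-wait "sleep": the caller is blocked for the requested
 * wall-clock time, but a core keeps spinning the whole time. */
static void busy_sleep(unsigned seconds) {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((unsigned)(now.tv_sec - start.tv_sec) < seconds);
}

If sleep is implemented this way, the thread is "asleep" from the program's point of view while still consuming a full core, which matches the behavior you describe.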
I am using perf in sampling mode to capture performance statistics of programs running on a multi-core NXP S32 platform running Linux 4.19.
E.g configuration
Core 0 - App0 , Core 1 - App1, Core 2 - App2
Without sampling, i.e. at program level, App0 takes 6.9 seconds.
On sampling every 1 million cycles, App0 takes 6.3 sec.
On sampling every 2 million cycles, App0 takes 6.4 sec.
On sampling every 5 million cycles, App0 takes 6.5 sec.
On sampling every 100 million cycles, App0 takes 6.8 sec.
As you can see, with a higher sampling period (100 million cycles) App0 takes longer to finish execution.
I would actually have expected the opposite: sampling every 1 million cycles should have made the program take more time to execute, due to the higher number of samples generated (perf overhead), compared to sampling every 100 million cycles.
I am unable to explain this behavior. What do you think is causing it?
Any leads would be helpful.
P.S. On the Pi 3B the behavior is as expected, i.e. sampling every 1 million cycles results in a longer execution time than sampling every 100 million cycles.
UPDATE: I do not use perf from the command line; instead I make the perf_event_open system call directly, with the following flags in struct perf_event_attr.
struct perf_event_attr hw_event;
pid_t pid = proccess_id;   // measure the given process/thread
int cpu = -1;              // ...on any CPU
unsigned long flags = 0;
int fd_current;

memset(&hw_event, 0, sizeof(struct perf_event_attr));
hw_event.type = event_type;
hw_event.size = sizeof(struct perf_event_attr);
hw_event.config = event;
if (group_fd == -1)
{
    hw_event.sample_period = 2000000;
    hw_event.sample_type = PERF_SAMPLE_READ;
    hw_event.precise_ip = 1;
}
hw_event.disabled = 1;                    // start the counter disabled; it is enabled later
hw_event.exclude_kernel = 0;              // 0 = do NOT exclude events that happen in kernel space
hw_event.exclude_hv = 1;                  // exclude events that happen in the hypervisor
hw_event.pinned = pinned;                 // keep the counter on the CPU if at all possible (hardware counters and group leaders only)
hw_event.exclude_user = 0;                // 0 = do NOT exclude events that happen in user space
hw_event.exclude_callchain_kernel = 0;    // 0 = include kernel callchains
hw_event.exclude_callchain_user = 0;      // 0 = include user callchains
hw_event.read_format = PERF_FORMAT_GROUP; // allows all counter values in an event group to be read with one read()

fd_current = syscall(__NR_perf_event_open, &hw_event, pid, cpu, group_fd, flags);
if (fd_current == -1) {
    printf("Error opening leader %llx\n", hw_event.config);
    exit(EXIT_FAILURE);
}
return fd_current;
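For completeness, here is a rough sketch of how such a group leader is typically enabled and read back when read_format is PERF_FORMAT_GROUP (based on the perf_event_open(2) man page, not my exact code; the values[16] bound is an arbitrary choice for the sketch):

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>

/* Layout returned by read() when only PERF_FORMAT_GROUP is set. */
struct group_read {
    uint64_t nr;          /* number of counters in the group */
    uint64_t values[16];  /* one value per counter */
};

static void measure(int leader_fd)
{
    struct group_read data;

    ioctl(leader_fd, PERF_EVENT_IOC_RESET, PERF_IOC_FLAG_GROUP);   /* zero all counters in the group */
    ioctl(leader_fd, PERF_EVENT_IOC_ENABLE, PERF_IOC_FLAG_GROUP);  /* start counting */

    /* ... run the code being measured ... */

    ioctl(leader_fd, PERF_EVENT_IOC_DISABLE, PERF_IOC_FLAG_GROUP); /* stop counting */
    if (read(leader_fd, &data, sizeof(data)) > 0)
        for (uint64_t i = 0; i < data.nr && i < 16; i++)
            printf("counter %llu: %llu\n",
                   (unsigned long long) i, (unsigned long long) data.values[i]);
}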
I am benchmarking some R statements (see details here) and found that my elapsed time is way longer than my user time.
user system elapsed
7.910 7.750 53.916
Could someone help me understand what factors (R or hardware) determine the difference between user time and elapsed time, and how I can improve it? In case it helps: I am running data.table data manipulation on a MacBook Air with a 1.7 GHz i5 and 4 GB RAM.
Update: My crude understanding is that user time is what it takes my CPU to process my job, while elapsed time is the length of time from when I submit a job until I get the data back. What else did my computer need to do after processing for 8 seconds?
Update: as suggested in the comments, I ran the test a couple of times on two data.tables: Y, with 104 columns (sorry, I add more columns as time goes by), and X, a subset of Y with only its 3 key columns. Below are the updates. Please note that I ran these two procedures consecutively, so the memory state should be similar.
X <- Y[, list(Year, MemberID, Month)]

system.time({
    X[, Month := -Month]
    setkey(X, Year, MemberID, Month)
    X[, Month := -Month]
})
user system elapsed
3.490 0.031 3.519

system.time({
    Y[, Month := -Month]
    setkey(Y, Year, MemberID, Month)
    Y[, Month := -Month]
})
user system elapsed
8.444 5.564 36.284
Here are the sizes of the only two objects in my workspace (commas added):
object.size(X)
83,237,624 bytes
object.size(Y)
2,449,521,080 bytes
Thank you
User time is how many seconds the computer spent doing your calculations. System time is how much time the operating system spent responding to your program's requests. Elapsed time is the sum of those two, plus whatever "waiting around" your program and/or the OS had to do. It's important to note that these numbers are aggregates of time spent: your program might compute for 1 second, then wait on the OS for 1 second, then wait on disk for 3 seconds, and repeat this cycle many times while it's running.
Based on the fact that your program took as much system time as user time, it was doing something very IO intensive: reading from disk a lot or writing to disk a lot. RAM is pretty fast, a few hundred nanoseconds usually, so if everything fits in RAM, elapsed time is usually just a little longer than user time. But a disk might take a few milliseconds to seek and even longer to return the data; that's slower by a factor of a million.
We've determined that your processor was "doing stuff" for ~8 + ~8 = ~16 seconds. What was it doing for the other ~54 - ~16 = ~38 seconds? Waiting for the hard drive to send it the data it asked for.
UPDATE1:
Matthew made some excellent points: I'm making assumptions that I probably shouldn't be making. Adam, if you'd care to publish a list of all the columns in your table (the datatypes are all we need), we can get a better idea of what's going on.
I just cooked up a little do-nothing program to validate my assumption that time not spent in userspace and kernel space is likely spent waiting for IO.
#include <stdio.h>

int main()
{
    int i;
    for(i = 0; i < 1000000000; i++)
    {
        int j, k, l, m;
        j = 10;
        k = i;
        l = j + k;
        m = j + k - i + l;
    }
    return 0;
}
When I run the resulting program and time it I see something like this:
mike@computer:~$ time ./waste_user
real 0m4.670s
user 0m4.660s
sys 0m0.000s
mike@computer:~$
As you can see by inspection, the program does no real work, and as such it doesn't ask the kernel to do anything beyond loading it into RAM and starting it running. So nearly ALL the "real" time is spent as "user" time.
Now a kernel-heavy do-nothing program (with fewer iterations to keep the time reasonable):
#include <stdio.h>

int main()
{
    FILE * random;
    random = fopen("/dev/urandom", "r");
    int i;
    for(i = 0; i < 10000000; i++)
    {
        fgetc(random);
    }
    return 0;
}
When I run that one, I see something more like this:
mike@computer:~$ time ./waste_sys
real 0m1.138s
user 0m0.090s
sys 0m1.040s
mike@computer:~$
Again it's easy to see by inspection that the program does little more than ask the kernel to give it random bytes. /dev/urandom is a non-blocking source of entropy. What does that mean? The kernel uses a pseudo-random number generator to quickly generate "random" values for our little test program. That means the kernel has to do some computation but it can return very quickly. So this program mostly waits for the kernel to compute for it, and we can see that reflected in the fact that almost all the time is spent on sys.
Now we're going to make one little change. Instead of reading from /dev/urandom which is non-blocking we'll read from /dev/random which is blocking. What does that mean? It doesn't do much computing but rather it waits around for stuff to happen on your computer that the kernel developers have empirically determined is random. (We'll also do far fewer iterations since this stuff takes much longer)
#include <stdio.h>

int main()
{
    FILE * random;
    random = fopen("/dev/random", "r");
    int i;
    for(i = 0; i < 100; i++)
    {
        fgetc(random);
    }
    return 0;
}
And when I run and time this version of the program, here's what I see:
mike@computer:~$ time ./waste_io
real 0m41.451s
user 0m0.000s
sys 0m0.000s
mike@computer:~$
It took 41 seconds to run, but immeasurably small amounts of time on user and sys. Why is that? All the time was spent in the kernel, but not doing active computation. The kernel was just waiting for stuff to happen. Once enough entropy was collected, the kernel would wake back up and send the data back to the program. (Note it might take much less or much more time to run on your computer depending on what all is going on.) I argue that the difference in time between user+sys and real is IO.
So what does all this mean? It doesn't prove that my answer is right because there could be other explanations for why you're seeing the behavior that you are. But it does demonstrate the differences between user compute time, kernel compute time and what I'm claiming is time spent doing IO.
Here's my source for the difference between /dev/urandom and /dev/random:
http://en.wikipedia.org/wiki//dev/random
UPDATE2:
I thought I would try and address Matthew's suggestion that perhaps L2 cache misses are at the root of the problem. The Core i7 has a 64 byte cache line. I don't know how much you know about caches, so I'll provide some details. When you ask for a value from memory the CPU doesn't get just that one value, it gets all 64 bytes around it. That means if you're accessing memory in a very predictable pattern -- like say array[0], array[1], array[2], etc -- it takes a while to get value 0, but then 1, 2, 3, 4... are much faster. Until you get to the next cache line, that is. If this were an array of ints, 0 would be slow, 1..15 would be fast, 16 would be slow, 17..31 would be fast, etc.
http://software.intel.com/en-us/forums/topic/296674
In order to test this out I've made two programs. They both have an array of structs in them with 1024*1024 elements. In one case the struct has a single double in it, in the other it's got 8 doubles in it. A double is 8 bytes long so in the second program we're accessing memory in the worst possible fashion for a cache. The first will get to use the cache nicely.
#include <stdio.h>
#include <stdlib.h>

#define MANY_MEGS 1048576

typedef struct {
    double a;
} PartialLine;

int main()
{
    int i, j;
    PartialLine* many_lines;
    int total_bytes = MANY_MEGS * sizeof(PartialLine);
    printf("Striding through %d total bytes, %d bytes at a time\n", total_bytes, sizeof(PartialLine));
    many_lines = (PartialLine*) malloc(total_bytes);

    PartialLine line;
    double x;
    for(i = 0; i < 300; i++)
    {
        for(j = 0; j < MANY_MEGS; j++)
        {
            line = many_lines[j];
            x = line.a;
        }
    }
    return 0;
}
When I run this program I see this output:
mike@computer:~$ time ./cache_hits
Striding through 8388608 total bytes, 8 bytes at a time
real 0m3.194s
user 0m3.140s
sys 0m0.016s
mike@computer:~$
Here's the program with the big structs; each one takes up 64 bytes of memory, not 8.
#include <stdio.h>
#include <stdlib.h>

#define MANY_MEGS 1048576

typedef struct {
    double a, b, c, d, e, f, g, h;
} WholeLine;

int main()
{
    int i, j;
    WholeLine* many_lines;
    int total_bytes = MANY_MEGS * sizeof(WholeLine);
    printf("Striding through %d total bytes, %d bytes at a time\n", total_bytes, sizeof(WholeLine));
    many_lines = (WholeLine*) malloc(total_bytes);

    WholeLine line;
    double x;
    for(i = 0; i < 300; i++)
    {
        for(j = 0; j < MANY_MEGS; j++)
        {
            line = many_lines[j];
            x = line.a;
        }
    }
    return 0;
}
And when I run it, I see this:
mike@computer:~$ time ./cache_misses
Striding through 67108864 total bytes, 64 bytes at a time
real 0m14.367s
user 0m14.245s
sys 0m0.088s
mike@computer:~$
The second program -- the one designed to have cache misses -- took five times as long to run for the exact same number of memory accesses.
Also worth noting is that in both cases, all the time was spent in user, not sys. That means the OS counts the time your program spends waiting for data against your program, not against the operating system. Given these two examples I think it's unlikely that cache misses are causing your elapsed time to be substantially longer than your user time.
UPDATE3:
I just saw your update that the really slimmed-down table ran about 10x faster than the regular-sized one. That too would indicate to me that (as another Matthew also said) you're running out of RAM.
Once your program tries to use more memory than your computer actually has installed, it starts swapping to disk. This is better than your program crashing, but it's much slower than RAM and can cause substantial slowdowns.
I'll try and put together an example that shows swap problems tomorrow.
UPDATE4:
Okay, here's an example program which is very similar to the previous one. But now the struct is 4096 bytes, not 8 bytes. In total this program will use 2 GB of memory rather than 64 MB. I also change things up a bit and make sure that I access memory randomly instead of element-by-element, so that the kernel can't get smart and start anticipating my program's needs. The caches are driven by hardware (and by fairly simple heuristics), but it's entirely possible that kswapd (the kernel swap daemon) could be substantially smarter than the cache.
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double numbers[512];
} WholePage;

int main()
{
    int memory_ops = 1024*1024;
    int total_memory = memory_ops / 2;
    int num_chunks = 8;
    int chunk_bytes = total_memory / num_chunks * sizeof(WholePage);
    int i, j, k, l;
    printf("Bouncing through %u MB, %d bytes at a time\n", chunk_bytes/1024*num_chunks/1024, sizeof(WholePage));

    WholePage* many_pages[num_chunks];
    for(i = 0; i < num_chunks; i++)
    {
        many_pages[i] = (WholePage*) malloc(chunk_bytes);
        if(many_pages[i] == 0){ exit(1); }
    }

    WholePage* page_list;
    WholePage* page;
    double x;
    for(i = 0; i < 300*memory_ops; i++)
    {
        j = rand() % num_chunks;
        k = rand() % (total_memory / num_chunks);
        l = rand() % 512;
        page_list = many_pages[j];
        page = page_list + k;
        x = page->numbers[l];
    }
    return 0;
}
From the program I called cache_hits to cache_misses we saw memory use increase 8x while execution time increased 5x. What do you expect to see when we run this program? It uses 32x as much memory as cache_misses but has the same number of memory accesses.
mike@computer:~$ time ./page_misses
Bouncing through 2048 MB, 4096 bytes at a time
real 2m1.327s
user 1m56.483s
sys 0m0.588s
mike@computer:~$
It took 8x as long as cache_misses and 40x as long as cache_hits. And this is on a computer with 4GB of RAM. I used 50% of my RAM in this program versus 1.5% for cache_misses and 0.2% for cache_hits. It got substantially slower even though it wasn't using up ALL the RAM my computer has. It was enough to be significant.
I hope this is a decent primer on how to diagnose problems with programs running slow.
I'm looking for a way to occupy exactly 80% (or any other number) of a single CPU in a consistent manner.
I need this for a unit test that exercises a component that triggers under specific CPU utilization conditions.
For this purpose I can assume that the machine is otherwise idle.
What's a robust and possibly OS independent way to do this?
There is no such thing as occupying the CPU 80% of the time: at any instant the CPU is either being used or idle. Over some period of time, however, you can get the average usage to be 80%. Is there a specific period you want it averaged over? This Python-like pseudo-code should work across platforms and, over each 1-second window, produce a CPU usage of about 80%:
import math
import time

while True:
    start = time.monotonic()
    while time.monotonic() - start < 0.8:   # busy for 0.8 s
        math.factorial(100)                 # or any other computation here
    time.sleep(0.2)                         # idle for 0.2 s
It's pretty easy to write a program that alternately spins and sleeps to get any particular load level you want. I threw this together in a couple of minutes:
#include <stdlib.h>
#include <signal.h>
#include <string.h>
#include <time.h>
#include <sys/time.h>

#define INTERVAL 500000          /* timer period in microseconds (0.5 s) */

volatile sig_atomic_t flag;
void setflag(int sig) { flag = 1; }

int main(int ac, char **av) {
    int load = 80;
    struct sigaction sigact;
    struct itimerval interval = { { 0, INTERVAL }, { 0, INTERVAL } };
    struct timespec pausetime = { 0, 0 };

    memset(&sigact, 0, sizeof(sigact));
    sigact.sa_handler = setflag;
    sigaction(SIGALRM, &sigact, 0);
    setitimer(ITIMER_REAL, &interval, 0);

    if (ac == 2) load = atoi(av[1]);
    /* sleep for (100-load)% of each interval, spin for the rest */
    pausetime.tv_nsec = INTERVAL*(100 - load)*10;
    while (1) {
        flag = 0;
        nanosleep(&pausetime, 0);
        while (!flag) { /* spin until the timer signal fires */ }
    }
    return 0;
}
The trick is that if you want to occupy 80% of the CPU, you keep the processor busy for 0.8 seconds (or 80% of whatever time interval you choose; here I take it to be 1 second), then let it sleep for 0.2 seconds. That said, it is advisable not to utilize the CPU too heavily or all your other processes will start running slowly; you could try around 20% or so.
Here is an example done in Python:
import time
import math

time_of_run = 0.1
percent_cpu = 80  # should ideally be replaced with a smaller number, e.g. 20
cpu_time_utilisation = float(percent_cpu) / 100
on_time = time_of_run * cpu_time_utilisation
off_time = time_of_run * (1 - cpu_time_utilisation)

while True:
    start_time = time.perf_counter()
    while time.perf_counter() - start_time < on_time:
        math.factorial(100)  # do any computation here
    time.sleep(off_time)