I'm working on an algorithm using OpenCL and I need to measure the execution time of its parallel and sequential versions. To do this, I'm using an external loop to run both versions and measure their times, but I have obtained:
Sequential: 3.06 seconds
Parallel: 269 seconds
The code that I'm using for the parallel version is:
t_start=clock(); /* Start measuring time */
for(i=0;i<=N; i++) // N is really big, around a million, but is the same for both versions
{
fitness = 0;
ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &global_item_size, NULL, 0, NULL, NULL);
ret = clEnqueueReadBuffer(command_queue, vdistance, CL_TRUE, 0, siz_mem_distance_code, distance_code, 0, NULL, NULL);
ret = clEnqueueReadBuffer(command_queue, vsumatorio, CL_TRUE, 0,siz_mem_sumatorio, sumatorio, 0, NULL, NULL);
fitness = (1/(*sumatorio)) + (*distance_code/12) + ((pow(*distance_code,2))/4) + ((pow(*distance_code,3))/6);
}
t_finish=clock(); /* End measuring time */
Before this piece of code, I have created/initialized everything needed to run an OpenCL program (platform, device, context, queue, buffers, kernel, ...), and after this code I release everything.
I have checked that this increase in time is due to reading both variables (distance_code and sumatorio) on every iteration, but I have to do it because I need to obtain the fitness value, which is a sequential instruction that can only be executed once the kernel has finished, so... Could you help me? What am I doing wrong?
I hope I have explained myself properly; thanks in advance.
Note: I'm only working with the CPU.
The overhead of launching so many kernels exceeds the benefit of parallelizing a for loop over only 64 data items. You need to rewrite your problem so that you launch relatively few kernels over large batches of data. In that case, and if the OpenCL compiler generates appropriate vectorized machine code, you will see an improvement over the sequential version.
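For illustration only, here is a minimal sketch of that restructuring on the host side, under the assumption that the kernel is rewritten to process BATCH iterations per launch and to write one (sumatorio, distance_code) pair per iteration into a results buffer; BATCH, vresults, results and the extra kernel argument are my inventions, not your code:
/* Hypothetical host loop: one launch and one read per BATCH iterations,
   instead of one launch and two reads per iteration. */
#define BATCH 4096

size_t global_item_size_batched = (size_t)BATCH * 64;   /* 64 work-items per iteration, as before */
cl_ulong base;

for (base = 0; base < N; base += BATCH)
{
    ret = clSetKernelArg(kernel, 0, sizeof(cl_ulong), &base);   /* tell the kernel which block of iterations to process */
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &global_item_size_batched, NULL, 0, NULL, NULL);
    /* one blocking read of BATCH result pairs instead of two reads per iteration */
    ret = clEnqueueReadBuffer(command_queue, vresults, CL_TRUE, 0, BATCH * sizeof(cl_float2), results, 0, NULL, NULL);
    for (i = 0; i < BATCH; i++)
    {
        float s = results[i].s[0];   /* per-iteration sumatorio */
        float d = results[i].s[1];   /* per-iteration distance_code */
        fitness = (1/s) + (d/12) + (pow(d,2)/4) + (pow(d,3)/6);
    }
}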
Additionally, you should check with either AMD's CodeXL or Intel's Offline Compiler if the generated code contains any vector instructions.
I'm working on a program that needs to get the current CPU usage. How can I achieve that in VB.Net?
I tried like 4 snippets but I still get 0% every time. Here is one example of what I've used: Link
Thanks In Advance ,
Anes08
Though it is not allowed to answer such questions, still, here's something that might help you get started:
Dim cpu As New System.Diagnostics.PerformanceCounter
cpu.CategoryName = "Processor"
cpu.CounterName = "% Processor Time"
cpu.InstanceName = "_Total"
MessageBox.Show(cpu.NextValue.ToString + "%")
If it doesn't work, here's a better version:
Dim cpu As PerformanceCounter ' Declare at class level
' On form load (actually you need to initialize it first)
cpu = New PerformanceCounter("Processor", "% Processor Time", "_Total")
' Finally, get the value:
MsgBox(cpu.NextValue & "%") ' Use .ToString if required
You can use LblCpuUsage.Text = CombinedAllCpuUsageOfEachThread.NextValue()
There is a helper library to get that information: the Performance Data Helper (see Using the PDH Functions to Consume Counter Data (Windows)).
Microsoft's examples are in C, but there are also corresponding VB (not .Net) functions: Performance Counters Functions for Visual Basic (Windows).
For me, I wanted an average. There were a couple of problems getting CPU utilization that seemed like they should have an easy packaged solution, but I didn't see one.
The first is, of course, that a value of 0 on the first request is useless. Since you already know that the first response is 0, why doesn't the function just take that into account and return the true .NextValue()?
The second problem is that an instantaneous reading may be wildly inaccurate when trying to make decisions about what resources your app has available to it, since it could be spiking, or between spikes.
My solution was to do a for loop that cycles through and gives you an average for the past few seconds. You can adjust the counter to make it shorter or longer (as long as it is more than 2).
public static float ProcessorUtilization;
public static float GetAverageCPU()
{
PerformanceCounter cpuCounter = new PerformanceCounter("Process", "% Processor Time", Process.GetCurrentProcess().ProcessName);
ProcessorUtilization = 0; // reset, otherwise repeated calls keep accumulating
for (int i = 0; i < 11; ++i)
{
ProcessorUtilization += (cpuCounter.NextValue() / Environment.ProcessorCount);
System.Threading.Thread.Sleep(500); // give the counter time between samples; back-to-back reads would all be taken at the same instant
}
// Remember the first value is 0, so we don't want to average that in.
Console.WriteLine(ProcessorUtilization / 10);
return ProcessorUtilization / 10;
}
I'm working with a legacy application on Linux that uses ClientMessage to do messaging between cooperating processes. The XClientMessageEvent structure provides a union as a convenience to use for custom data:
union {
char b[20];
short s[10];
long l[5];
} data;
And the man page claims: "The b, s, and l members represent data of twenty 8-bit values, ten 16-bit values, and five 32-bit values. "
This is all well and good if compiling on a 32-bit system. Just different access to the 20 bytes of the union, right? Well, on a 64-bit system, the "long l[5]" member takes up 40 bytes.
My testing shows that if the sender considers these as 64-bit longs and uses them, the receiver only gets the first 20 bytes of the "data" structure. The last 20 bytes are lost (appear as zeros to the receiver).
Since the application relies pretty heavily on packing this union with both shorts and longs, I'm stuck.
I'm using: xorg-x11-server-Xorg-1.15.0-22.el6.centos.x86_64
Has anyone had any experience in solving this? Am I just not understanding this correctly?
It sure seems like a bug in X to me. 64-bit X11 has been around for a long time. I've only seen one other mention of this issue on the web.
I've used an XClientMessageEvent on 64-bit; this works great for going fullscreen:
XClientMessageEvent xevent;
memset (&xevent, 0, sizeof (xevent));
xevent.type = ClientMessage;
xevent.window = mw;
xevent.message_type = XInternAtom (m_display, "_NET_WM_FULLSCREEN_MONITORS", False);
xevent.format = 32;
xevent.data.l[0] = m_fullScreen;
xevent.data.l[1] = m_fullScreen;
xevent.data.l[2] = m_fullScreen;
xevent.data.l[3] = m_fullScreen;
xevent.data.l[4] = 1; // source indication
XSendEvent (m_display, DefaultRootWindow (m_display), False, SubstructureRedirectMask | SubstructureNotifyMask, (XEvent *) & xevent);
Did you set xevent.format as well? It's used to indicate which of the 3 members b, s or l is used (8, 16 or 32 bits).
I had a piece of code, which looked like this,
for(i=0;i<NumberOfSteps;i++)
{
for(k=0;k<NumOfNodes;k++)
{
mark[crawler[k]]++;
r = rand() % node_info[crawler[k]].num_of_nodes;
crawler[k] = (int)DataBlock[node_info[crawler[k]].index+r][0];
}
}
I changed it so that the load can be split among multiple threads. Now it looks like this,
for(i=0;i<NumberOfSteps;i++)
{
for(k=0;k<NumOfNodes;k++)
{
pthread_mutex_lock( &mutex1 );
mark[crawler[k]]++;
pthread_mutex_unlock( &mutex1 );
pthread_mutex_lock( &mutex1 );
r = rand() % node_info[crawler[k]].num_of_nodes;
pthread_mutex_unlock( &mutex1 );
pthread_mutex_lock( &mutex1 );
crawler[k] = (int)DataBlock[node_info[crawler[k]].index+r][0];
pthread_mutex_unlock( &mutex1 );
}
}
I need the mutexes to protect shared variables. It turns out that my parallel code is slower. But why? Is it because of the mutexes?
Could this possibly be something to do with the cache line size?
You are not parallelizing anything but the loop heads. Everything between lock and unlock is forced to be executed sequentially. And since lock/unlock are (potentially) expensive operations, the code is getting slower.
To fix this, you should at least separate expensive computations (without mutex protection) from access to shared data areas (with mutexes). Then try to move the mutexes out of the inner loop.
You could use atomic increment instructions (depends on platform) instead of plain '++', which is generally cheaper than mutexes. But beware of doing this often on data of a single cache line from different threads in parallel (see 'false sharing').
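As a sketch only (not from the original answer): with C11 atomics, and assuming mark can be declared _Atomic (NUM_NODES is a placeholder for its real size), the counter update needs no mutex at all:
#include <stdatomic.h>

_Atomic int mark[NUM_NODES];   /* NUM_NODES stands in for the real array size */

/* inside the inner loop, instead of lock / mark[crawler[k]]++ / unlock: */
atomic_fetch_add_explicit(&mark[crawler[k]], 1, memory_order_relaxed);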
AFAICS, you could rewrite the algorithm as indicated below without needing mutexes or atomic increments at all. getFirstK() is NumOfNodes/NumOfThreads*t if NumOfNodes is an integral multiple of NumOfThreads.
for(t=0;t<NumberOfThreads;t++)
{
kbegin = getFirstK(NumOfNodes, NumOfThreads, t);
kend = getFirstK(NumOfNodes, NumOfThreads, t+1);
// start the following in a separate thread with kbegin and kend
// copied to thread local vars kbegin_ and kend_
int k, i, r;
unsigned state = kend_; // really bad seed
for(k=kbegin_;k<kend_;k++)
{
for(i=0;i<NumberOfSteps;i++)
{
mark[crawler[k]]++;
r = rand_r(&state) % node_info[crawler[k]].num_of_nodes;
crawler[k] = (int)DataBlock[node_info[crawler[k]].index+r][0];
}
}
}
// wait for threads/jobs to complete
This way of generating random numbers may lead to bad random distributions; see this question for details.
What would be the best and most accurate way to determine how long it took to process a routine, such as a procedure of function?
I ask because I am currently trying to optimize a few functions in my application; when I test the changes, it is hard to tell just by looking whether there were any improvements at all. So if I could return an accurate, or nearly accurate, time it took to process a routine, I would have a clearer idea of how well, if at all, the code changes have worked.
I considered using GetTickCount, but I am unsure whether this would be anywhere near accurate.
It would be useful to have a reusable function/procedure to calculate the time of a routine, and use it something like this:
// < prepare for calculation of code
...
ExecuteSomeCode; // < code to test
...
// < stop calculating code and return time it took to process
I look forward to hearing some suggestions.
Thanks.
Craig.
From my knowledge, the most accurate method is using QueryPerformanceCounter (with QueryPerformanceFrequency to convert the counts to seconds):
code:
var
Freq, StartCount, StopCount: Int64;
TimingSeconds: real;
begin
QueryPerformanceFrequency(Freq);
QueryPerformanceCounter(StartCount);
// Execute process that you want to time: ...
QueryPerformanceCounter(StopCount);
TimingSeconds := (StopCount - StartCount) / Freq;
// Display timing: ...
end;
Try Eric Grange's Sampling Profiler.
From Delphi 6 upwards you can use the x86 Timestamp counter.
This counts CPU cycles; on a 1 GHz processor, each count takes one nanosecond.
Can't get more accurate than that.
function RDTSC: Int64; assembler;
asm
// RDTSC can be executed out of order, so the pipeline needs to be flushed
// to prevent RDTSC from executing before your code is finished.
// Flush the pipeline
XOR eax, eax
PUSH EBX
CPUID
POP EBX
RDTSC //Get the CPU's time stamp counter.
end;
On x64 the following code is more accurate, because it does not suffer from the delay of CPUID.
rdtscp // On x64 we can use the serializing version of RDTSC
push rbx // Serialize the code after, to avoid OoO sneaking in
push rax // subsequent instructions prior to executing RDTSCP.
push rdx // See: http://www.intel.de/content/dam/www/public/us/en/documents/white-papers/ia-32-ia-64-benchmark-code-execution-paper.pdf
xor eax,eax
cpuid
pop rdx
pop rax
pop rbx
shl rdx,32
or rax,rdx
Use the above code to get the timestamp before and after executing your code.
Most accurate method possible and easy as pie.
Note that you need to run a test at least 10 times to get a good result, on the first pass the cache will be cold, and random harddisk reads and interrupts can throw off your timings.
Because this thing is so accurate it can give you the wrong idea if you only time the first run.
Why you should not use QueryPerformanceCounter()
QueryPerformanceCounter() gives the same amount of time if the CPU slows down; it compensates for CPU throttling. RDTSC, on the other hand, will give you the same number of cycles if your CPU slows down due to overheating or whatnot.
So if your CPU starts running hot and needs to throttle down, QueryPerformanceCounter() will say that your routine is taking more time (which is misleading), and RDTSC will say that it takes the same number of cycles (which is accurate).
This is what you want, because you're interested in the number of CPU cycles your code uses, not the wall-clock time.
From the latest Intel docs: http://software.intel.com/en-us/articles/measure-code-sections-using-the-enhanced-timer/?wapkw=%28rdtsc%29
Using the Processor Clocks
This timer is very accurate. On a system with a 3GHz processor, this timer can measure events that last less than one nanosecond. [...] If the frequency changes while the targeted code is running, the final reading will be redundant since the initial and final readings were not taken using the same clock frequency. The number of clock ticks that occurred during this time will be accurate, but the elapsed time will be an unknown.
When not to use RDTSC
RDTSC is useful for basic timing. If you're timing multithreaded code on a single-CPU machine, RDTSC will work fine. If you have multiple CPUs, the start count may come from one CPU and the end count from another.
So don't use RDTSC to time multithreaded code on a multi-CPU machine. On a single-CPU machine it works fine, and single-threaded code on a multi-CPU machine is also fine.
Also remember that RDTSC counts CPU cycles. If there is something that takes time but doesn't use the CPU, like disk I/O or the network, then RDTSC is not a good tool.
But the documentation says RDTSC is not accurate on modern CPUs
RDTSC is not a tool for keeping track of time, it's a tool for keeping track of CPU cycles.
For that it is the only tool that is accurate. Routines that keep track of time are not accurate on modern CPUs, because the CPU clock is not absolute like it used to be.
You didn't specify your Delphi version, but Delphi XE has a TStopWatch declared in unit Diagnostics. This will allow you to measure the runtime with reasonable precision.
uses
Diagnostics;
var
sw: TStopWatch;
begin
sw := TStopWatch.StartNew;
<dosomething>
Writeln(Format('runtime: %d ms', [sw.ElapsedMilliseconds]));
end;
I ask because I am currently trying to optimize a few functions
It is natural to think that measuring is how you find out what to optimize, but there's a better way.
If something takes a large enough fraction of time (F) to be worth optimizing, then if you simply pause it at random, F is the probability you will catch it in the act.
Do that several times, and you will see precisely why it's doing it, down to the exact lines of code.
More on that.
Here's an example.
Fix it, and then do an overall measurement to see how much you saved, which should be about F.
Rinse and repeat.
Here are some procedures I made to handle checking the duration of a function. I stuck them in a unit I called uTesting and then just throw it into the uses clause during my testing.
Declaration
Procedure TST_StartTiming(Index : Integer = 1);
//Starts the timer by storing now in Time
//Index is the index of the timer to use. 100 are available
Procedure TST_StopTiming(Index : Integer = 1;Display : Boolean = True; DisplaySM : Boolean = False);
//Stops the timer and stores the difference between time and now into time
//Displays the result if Display is true
//Index is the index of the timer to use. 100 are available
Procedure TST_ShowTime(Index : Integer = 1;Detail : Boolean = True; DisplaySM : Boolean = False);
//In a ShowMessage displays time
//Uses TimeToStr if Detail is false else it breaks it down (H,M,S,MS)
//Index is the index of the timer to use. 100 are available
variables declared
var
Time : array[1..100] of TDateTime;
Implementation
Procedure TST_StartTiming(Index : Integer = 1);
begin
Time[Index] := Now;
end;
Procedure TST_StopTiming(Index : Integer = 1;Display : Boolean = True; DisplaySM : Boolean = False);
begin
Time[Index] := Now - Time[Index];
if Display then TST_ShowTime(Index, True, DisplaySM);
end;
Procedure TST_ShowTime(Index : Integer = 1;Detail : Boolean = True; DisplaySM : Boolean = False);
var
H,M,S,MS : Word;
begin
if Detail then
begin
DecodeTime(Time[Index],H,M,S,MS);
if DisplaySM then
ShowMessage('Hour = ' + FloatToStr(H) + #13#10 +
'Min = ' + FloatToStr(M) + #13#10 +
'Sec = ' + FloatToStr(S) + #13#10 +
'MS = ' + FloatToStr(MS) + #13#10)
else
OutputDebugString(PChar('Hour = ' + FloatToStr(H) + #13#10 +
'Min = ' + FloatToStr(M) + #13#10 +
'Sec = ' + FloatToStr(S) + #13#10 +
'MS = ' + FloatToStr(MS) + #13#10));
end
else
begin
if DisplaySM then
ShowMessage(TimeToStr(Time[Index]))
else
OutputDebugString(PChar(TimeToStr(Time[Index])));
end;
end;
Use this http://delphi.about.com/od/windowsshellapi/a/delphi-high-performance-timer-tstopwatch.htm
clock_gettime() is the high-resolution solution, precise to nanoseconds; you can also use RDTSC, which is precise to the CPU cycle; and lastly you can simply use gettimeofday().
How can you measure the amount of time a function will take to execute?
This is a relatively short function and the execution time would probably be in the millisecond range.
This particular question relates to an embedded system, programmed in C or C++.
The best way to do that on an embedded system is to set an external hardware pin when you enter the function and clear it when you leave the function. This is done preferably with a little assembly instruction so you don't skew your results too much.
Edit: One of the benefits is that you can do it in your actual application and you don't need any special test code. External debug pins like that are (should be!) standard practice for every embedded system.
There are three potential solutions:
Hardware Solution:
Use a free output pin on the processor and hook an oscilloscope or logic analyzer to the pin. Initialize the pin to a low state; just before calling the function you want to measure, assert the pin to a high state, and just after returning from the function, deassert the pin.
*io_pin = 1;
myfunc();
*io_pin = 0;
Bookworm solution:
If the function is fairly small and you can manage the disassembled code, you can crack open the processor architecture databook and count the cycles it will take the processor to execute each instruction. This will give you the number of cycles required.
Time = # cycles * clock ticks per instruction / processor clock rate
This is easier to do for smaller functions, or for code written in assembler (for a PIC microcontroller, for example).
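For example (with made-up numbers): a routine that the databook says takes 120 instruction cycles on a PIC clocked at 8 MHz, where one instruction cycle is 4 clock ticks, would take 120 * 4 / 8,000,000 s = 60 µs.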
Timestamp counter solution:
Some processors have a timestamp counter which increments at a rapid rate (every few processor clock ticks). Simply read the timestamp before and after the function.
This will give you the elapsed time, but beware that you might have to deal with the counter rollover.
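A minimal sketch of this, assuming a hypothetical read_timestamp() that returns a free-running 32-bit counter (the name and width are placeholders, not a real API):
#include <stdint.h>

extern uint32_t read_timestamp(void);   /* placeholder for the platform's counter read */
extern void myfunc(void);               /* the function being measured */

uint32_t time_myfunc_ticks(void)
{
    uint32_t t_before = read_timestamp();
    myfunc();
    uint32_t t_after = read_timestamp();
    /* unsigned wrap-around arithmetic: still correct if the counter
       overflowed at most once between the two reads */
    return t_after - t_before;
}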
Invoke it in a loop with a ton of invocations, then divide by the number of invocations to get the average time.
so:
#include <time.h>

double average_call_time(void) {
    clock_t begin = clock();   /* begin timing; clock() used here as one portable option */
    for (int i = 0; i < 10000; i++) {
        invokeFunction();
    }
    clock_t end = clock();     /* end timing */
    /* divide by 10000 to get the average time per call */
    return (double)(end - begin) / CLOCKS_PER_SEC / 10000.0;
}
If you're using Linux, you can time a program's runtime by typing at the command line:
time [program_name]
If you run only the function in main() (assuming C++), the rest of the app's time should be negligible.
I repeat the function call a lot of times (millions) but also employ the following method to discount the loop overhead:
start = getTicks();
repeat n times {
myFunction();
myFunction();
}
lap = getTicks();
repeat n times {
myFunction();
}
finish = getTicks();
// overhead + function + function
elapsed1 = lap - start;
// overhead + function
elapsed2 = finish - lap;
// overhead + function + function - overhead - function = function
ntimes = elapsed1 - elapsed2;
once = ntimes / n; // Average time it took for one function call, sans loop overhead
Instead of calling function() twice in the first loop and once in the second loop, you could just call it once in the first loop and not at all (i.e. an empty loop) in the second; however, the empty loop could be optimized out by the compiler, giving you negative timing results :)
start_time = timer
function()
exec_time = timer - start_time
Windows XP/NT Embedded or Windows CE/Mobile
You can use QueryPerformanceCounter() to get the value of a VERY FAST counter before and after your function. Then you subtract those 64-bit values and get a delta of "ticks". Using QueryPerformanceFrequency() you can convert the "delta ticks" to an actual time unit. You can refer to the MSDN documentation about those WIN32 calls.
Other embedded systems
Without operating systems or with only basic OSes you will have to:
program one of the internal CPU timers to run and count freely.
configure it to generate an interrupt when the timer overflows, and in this interrupt routine increment a "carry" variable (this is so you can actually measure time longer than the resolution of the timer chosen).
before your function you save BOTH the "carry" value and the value of the CPU register holding the running ticks for the counting timer you configured.
do the same after your function.
subtract them to get a delta of counter ticks.
from there it is just a matter of knowing how long a tick lasts on your CPU/hardware, given the external clock and the prescaler (de-multiplication) you configured while setting up your timer. You multiply that "tick length" by the "delta ticks" you just got.
VERY IMPORTANT: Do not forget to disable interrupts before and restore them after getting those timer values (both the carry and the register value), otherwise you risk saving incorrect values.
NOTES
This is very fast because it is only a few assembly instructions to disable interrupts, save two integer values and re-enable interrupts. The actual subtraction and conversion to real time units occurs OUTSIDE the zone of time measurement, that is, AFTER your function.
You may wish to put that code into a function so you can reuse it all around, but it may slow things down a bit because of the function call and the pushing of all the registers to the stack, plus the parameters, then popping them again. In an embedded system this may be significant. It may be better in C to use macros instead, or to write your own assembly routine saving/restoring only the relevant registers.
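A rough sketch of the above, where every name is a placeholder rather than a real API: TIMER_COUNT is the free-running hardware counter register, timer_carry is the variable incremented by the overflow interrupt, and disable_interrupts()/enable_interrupts() stand for the platform's intrinsics:
#include <stdint.h>

volatile uint32_t timer_carry;                 /* incremented by the overflow interrupt routine */

static uint64_t read_ticks(void)
{
    uint32_t carry, count;
    disable_interrupts();                      /* keep carry and counter consistent */
    carry = timer_carry;
    count = TIMER_COUNT;                       /* read the hardware counter register */
    enable_interrupts();
    /* assumes the counter does not wrap between the two reads above */
    return ((uint64_t)carry << 32) | count;    /* assumes a 32-bit counter */
}

uint64_t time_my_function_ticks(void)
{
    uint64_t t0 = read_ticks();
    my_function();                             /* the code being measured */
    uint64_t t1 = read_ticks();
    return t1 - t0;                            /* multiply by the tick length afterwards */
}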
It depends on your embedded platform and what type of timing you are looking for. For embedded Linux, there are several ways you can accomplish this. If you wish to measure the amount of CPU time used by your function, you can do the following:
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#define SEC_TO_NSEC(s) ((s) * 1000 * 1000 * 1000)
int work_function(int c) {
// do some work here
int i, j;
int foo = 0;
for (i = 0; i < 1000; i++) {
for (j = 0; j < 1000; j++) {
foo ^= i + j;
}
}
return foo;
}
int main(int argc, char *argv[]) {
struct timespec pre;
struct timespec post;
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &pre);
work_function(0);
clock_gettime(CLOCK_THREAD_CPUTIME_ID, &post);
printf("time %d\n",
(SEC_TO_NSEC(post.tv_sec) + post.tv_nsec) -
(SEC_TO_NSEC(pre.tv_sec) + pre.tv_nsec));
return 0;
}
You will need to link this with the realtime library, just use the following to compile your code:
gcc -o test test.c -lrt
You may also want to read the man page on clock_gettime; there are some issues with running this code on SMP-based systems that could invalidate your testing. You could use something like sched_setaffinity() or the command line cpuset to force the code onto only one core.
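A minimal sketch of the sched_setaffinity() route (pinning to core 0 is an arbitrary choice for illustration; call this before the timed section):
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static void pin_to_core0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                                  /* allow the process to run only on core 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)  /* pid 0 means the calling process */
        perror("sched_setaffinity");
}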
If you are looking to measure user and system time, then you could use times(NULL), which returns something like jiffies. Or you can change the parameter for clock_gettime() from CLOCK_THREAD_CPUTIME_ID to CLOCK_MONOTONIC... but be careful of wrap-around with CLOCK_MONOTONIC.
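For illustration, a small sketch of the times() route, passing a struct tms rather than NULL so the user/system split is visible; work_function is the example function from above:
#include <stdio.h>
#include <unistd.h>
#include <sys/times.h>

int main(void) {
    struct tms before, after;
    long ticks_per_sec = sysconf(_SC_CLK_TCK);  /* clock ticks per second used by times() */

    times(&before);
    work_function(0);                           /* the example function from above */
    times(&after);

    printf("user %.3f s, system %.3f s\n",
           (double)(after.tms_utime - before.tms_utime) / ticks_per_sec,
           (double)(after.tms_stime - before.tms_stime) / ticks_per_sec);
    return 0;
}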
For other platforms, you are on your own.
Drew
I always implement an interrupt driven ticker routine. This then updates a counter that counts the number of milliseconds since start up. This counter is then accessed with a GetTickCount() function.
Example:
#define TICK_INTERVAL 1 // milliseconds between ticker interrupts
static unsigned long tickCounter;
interrupt ticker (void)
{
tickCounter += TICK_INTERVAL;
...
}
unsigned long GetTickCount(void)
{
return tickCounter;
}
In your code you would time the code as follows:
int function(void)
{
unsigned long ticks = GetTickCount();
do something ...
printf("Time is %lu", GetTickCount() - ticks);
}
In OS X terminal (and probably Unix, too), use "time":
time python function.py
If the code is .NET, use the Stopwatch class (.NET 2.0+), NOT DateTime.Now. DateTime.Now isn't updated accurately enough and will give you crazy results.
If you're looking for sub-millisecond resolution, try one of these timing methods. They'll all get you resolution in at least the tens or hundreds of microseconds:
If it's embedded Linux, look at Linux timers:
http://linux.die.net/man/3/clock_gettime
Embedded Java, look at nanoTime(), though I'm not sure this is in the embedded edition:
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#nanoTime()
If you want to get at the hardware counters, try PAPI:
http://icl.cs.utk.edu/papi/
Otherwise you can always go to assembler. You could look at the PAPI source for your architecture if you need some help with this.