Has anyone come across kdb's matrix functions? I found them quite slow compared to other tools.
For the matrix inverse function inv on a 1000 by 1000 float matrix, kdb+ took 16,682 milliseconds for 10 executions. For matrix multiplication of two 1000 by 1000 matrices, it took 3,380 milliseconds.
I also tried the same experiment on DolphinDB, a similar time-series database with built-in analytics features. DolphinDB's inv function is about 17 times faster and its matrix multiplication is about 6 times faster.
I would have thought matrix-operation optimization was well established by now. Can any kdb expert explain the reason behind this, or suggest ways to improve it?
I used the 64-bit version of kdb+ 4.0 and DolphinDB_Linux_V2.00.7 (DolphinDB community version: 2 cores and 8 GB memory). Both experiments were conducted using 2 CPU cores.
KDB implementation
// Start the server
rlwrap -r taskset -c 0,1 ./l64/q -p 5002 -s 2
// The code
ma:1000 cut (-5.0+ 1000000?10.0)
mb:1000 cut (-5.0+ 1000000?10.0)
\t do[10;inv ma]
16682
\t do[10;ma mmu mb]
3380
DolphinDB implementation
// Start the server
rlwrap -r ./dolphindb -localSite localhost:5002:local5002 -localExecutors 1
// The code
ma=(-5.0+ rand(10.0,1000000))$1000:1000;
mb=(-5.0+ rand(10.0,1000000))$1000:1000;
timer(10) ma.inv();
Time elapsed: 975.349 ms
timer(10) ma**mb;
Time elapsed: 581.618 ms
DolphinDB uses openblas for matrix operations like inv while kdb+ doesn't.
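For a rough external baseline, here is a minimal NumPy sketch (NumPy typically links against OpenBLAS or MKL, so it exercises the same kind of BLAS/LAPACK routines; the sizes mirror the question, but timings will of course differ by machine):
import time
import numpy as np

ma = np.random.uniform(-5.0, 5.0, (1000, 1000))
mb = np.random.uniform(-5.0, 5.0, (1000, 1000))

t0 = time.perf_counter()
for _ in range(10):
    np.linalg.inv(ma)              # LAPACK getrf/getri under the hood
print("inv  x10:", round(time.perf_counter() - t0, 3), "s")

t0 = time.perf_counter()
for _ in range(10):
    ma @ mb                        # BLAS dgemm under the hood
print("mmul x10:", round(time.perf_counter() - t0, 3), "s")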
Nobody here is privy to what kdb and dolphin are doing under the covers for these operations, other than knowing that inv in kdb uses LU decomposition (previously it used Cholesky decomposition as well).
In general kdb is not optimised for matrix operations; it is optimised for vector operations. Case in point: if you have enough worker threads and use peach:
q)\t do[10;ma mmu mb]
2182
q)\t do[10;mmu[;mb]peach ma]
375
If you wanted to do serious matrix operations in kdb you would load C libraries to carry out the heavy lifting. For example using the Q math library (http://althenia.net/qml):
q)\t inv ma
1588
q)\t .qml.minv ma
689
Related
One can see from this tutorial on the usage of Intel MKL DFTs that Dr. Andrey E. Vladimirov uses the time elapsed during a task, namely t1-t0, to compute the number of GigaFLOPS using GF/s = HztoPerf/(t1-t0) where HztoPerf = 5.0 * 1e-9 * double(fft_size) * log2(double(fft_size)) * double(num_fft).
Is this a general formula? If not, how do I deduce the average GF/s for my CPU (Intel Xeon E5-1660 at 3 GHz with 8 cores) if I know the time elapsed to run a computation (e.g. involving various FFTs)?
You have to know how many FP operations your problem requires. Then you divide that by time.
1e-9 accounts for the Giga = 10^9 metric prefix. Without that, you'd have FLOP/s not GFLOP/s if you divide FLoating point OPeration count by seconds.
5.0 * fft_size * log2(fft_size) appears to be the number of FP ops per FFT.
An efficient FFT is O(n log2(n)), and apparently this implementation has a constant factor of 5. (Or possibly that's including some work done using the result?)
num_fft is presumably the total number of FFTs of that size done, i.e. the repeat count. So the product of all those things is the number of FP ops actually done during computation of the FFT.
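Putting those pieces together, a minimal sketch of the calculation (fft_size, num_fft, and elapsed are placeholder values here; substitute your own measurement of t1 - t0):
import math

fft_size = 2**20      # example FFT length (assumption)
num_fft  = 100        # number of FFTs of that size performed (assumption)
elapsed  = 0.85       # t1 - t0 in seconds, from your own timing

flop   = 5.0 * fft_size * math.log2(fft_size) * num_fft   # estimated FP operation count
gflops = flop * 1e-9 / elapsed                             # scale to Giga, divide by time
print(f"{gflops:.2f} GFLOP/s")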
Hardware performance counters on Intel CPUs can record number of FLOPs (even counting FMAs as 2): there are events like fp_arith_inst_retired.256b_packed_double for various SIMD widths.
perf has a GFLOPs "metric group" you can use that enables the relevant events and calculates it for you:
perf stat --all-user -M GFLOPs ./my_program my args
Restricting counting to user space is probably redundant here: kernel code might use SIMD for things like software RAID5/6, but not in interrupt handlers and probably not in system calls, and it doesn't do FP math anyway.
Example on my i7-6700k Skylake
$ perf stat --all-user -M GFLOPs awk 'BEGIN{for(i=0;i<100000000;i++){}}'
Performance counter stats for 'awk BEGIN{for(i=0;i<100000000;i++){}}':
0 fp_arith_inst_retired.256b_packed_single # 0.03 GFLOPs (66.58%)
99,934,901 fp_arith_inst_retired.scalar_double (66.68%)
0 fp_arith_inst_retired.128b_packed_single (66.71%)
0 fp_arith_inst_retired.scalar_single (66.71%)
0 fp_arith_inst_retired.256b_packed_double (66.71%)
0 fp_arith_inst_retired.128b_packed_double (66.62%)
3,352,766,500 ns duration_time
3.352766500 seconds time elapsed
3.347268000 seconds user
0.000000000 seconds sys
Unfortunately it had to multiplex between those events, since there were more than 4, and hyperthreading is enabled, so the total number of scalar double-precision FP operations (99,934,901) was measured a bit lower than the awk loop iteration count. With just -e task_clock,cycles,instructions,fp_arith_inst_retired.scalar_double, it came out at exactly 100,000,000 counts, since apparently gawk did no other FP operations.
Of course awk is not a high-FP-throughput program, and only used scalar FP math. Numeric variables in awk are double-precision, like JavaScript, but unlike JS it doesn't JIT, let alone take advantage of the ability to do them as integer.
I have a simple, slightly silly question, but I need some clarification for an upcoming exam so I don't make a stupid mistake. I am currently taking a class in computer organization and design and am learning about execution time, CPI, clock cycles, etc.
For a problem, I have to calculate the number of cycles for 2 compilers and find out which one is faster and by how much, given the number of instructions and the cycles for each instruction. My main problem is figuring out how much faster the faster compiler is.
For example, let's say there are two compilers:
Compiler 1 has 3 load instructions, 4 store instructions, and 5 add instructions.
Compiler 2 has 5 load instructions, 4 store instructions, and 3 add instructions.
A load instruction takes 2 cycles, a store instruction takes 3 cycles, and an add instruction takes 1 cycle.
So what I would do is add up the instructions, (3+4+5) and (5+4+3), which both come to 12 instructions.
I'd then calculate the cycles by multiplying the count of each instruction type by its cycle cost and adding them all together, like this:
Compiler 1: (3*2)+(4*3)+(5*1) = 23 cycles
Compiler 2: (5*2)+(4*3)+(3*1) = 25 cycles
So obviously compiler 1 is faster because it requires fewer cycles. To find out how much faster compiler 1 is than compiler 2, would I just take the ratio of the cycles?
My calculation was 23/25 = 0.92, so compiler 1 is 0.92 times faster than compiler 2 (92% faster).
A classmate of mine was discussing this with me and claims that it would be 25/23 which would mean it is 1.08 times faster.
I know I can also calculate this by dividing the cycles by the instructions like:
23 cycles/12 instructions = 1.91
25 cycles/12 instructions = 2.08
and then 1.91/2.08 = 0.92 which is the same as the above answer.
I'm not sure which way would be correct.
I was also wondering: if the number of instructions were different for the second compiler, let's say 15 instructions, would calculating the ratio of the cycles still be sufficient?
Or would I have to divide the cycles by the instructions (cycles/instructions), but use 15 instructions for both (e.g. 23/15 and 25/15), and then divide the two quotients to get the speedup? I get the same number (0.92) in that case as well.
Thank you for any clarification.
The first compiler would run at 25/23 ≈ 1.087 times the speed of the second compiler, which is about 8.7% faster (the speedup ratio is the slower cycle count divided by the faster one, and 1.0 + 0.087 = 1.087).
Probably both calculations are inaccurate; with modern multi-core processors, a compiler that generates more instructions may actually produce faster code.
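For what it's worth, here is a minimal sketch of the arithmetic above (the instruction mixes and cycle costs are simply the example numbers from the question):
cycles_per_op = {"load": 2, "store": 3, "add": 1}

compiler1 = {"load": 3, "store": 4, "add": 5}
compiler2 = {"load": 5, "store": 4, "add": 3}

def total_cycles(mix):
    # sum over instruction types: count * cycles per instruction
    return sum(cycles_per_op[op] * n for op, n in mix.items())

c1 = total_cycles(compiler1)   # 23 cycles
c2 = total_cycles(compiler2)   # 25 cycles
print(c1, c2, c2 / c1)         # 23 25 1.0869... -> compiler 1 is ~1.09x the speed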
I have some performance problems with parallel computing in Julia. I am new to both Julia and parallel computation.
In order to learn, I parallelized some code that should benefit from parallelization, but it does not.
The program estimates the mean of the means of arrays whose components are drawn at random from a uniform distribution.
Serial version
tic()
function mean_estimate(N::Int)
iter = 100000*2
p = 5000
vec_mean = zeros(iter)
for i = 1:iter
vec_mean[i] = mean( rand(p) )
end
return mean(vec_mean)
end
a = mean_estimate(0)
toc()
println("The mean is: ", a)
Parallelized version
addprocs(CPU_CORES - 1)
println("CPU cores ", CPU_CORES)
tic()
@everywhere function mean_estimate(N::Int)
iter = 100000
p = 5000
vec_mean = zeros(iter)
for i = 1:iter
vec_mean[i] = mean( rand(p) )
end
return mean(vec_mean)
end
the_mean = mean(vcat(pmap(mean_estimate,[1,2])...))
toc()
println("The mean is: ", the_mean)
Notes:
The factor 2 in iter = 100000*2 in the serial code is there because I tried the code on a PC with two cores.
I checked the usage of the two cores with htop, and it seems to be ok.
The outputs I get are:
me@pentium-ws:~/average$ time julia serial.jl
elapsed time: 2.68671022 seconds
The mean is: 0.49999736055814215
real 0m2.961s
user 0m2.928s
sys 0m0.116s
and
me@pentium-ws:~/average$ time julia -p 2 parallel.jl
CPU cores 2
elapsed time: 2.890163089 seconds
The mean is: 0.5000104221069994
real 0m7.576s
user 0m11.744s
sys 0m0.308s
I've noticed that the serial version is slightly faster than the parallelized one for the timed part of the code. Also, there is a large difference in the total execution time.
Questions
Why is the parallelized version slower? (What am I doing wrong?)
Which is the right way to parallelize this program?
Note: I use pmap with vcat because I wish to try with the median too.
Thanks for your help
EDIT
I measured times as @HighPerformanceMark suggested. The tic()/toc() times are the following. The iteration number is 2E6 for every case.
Array size    Single thread    Parallel    Ratio
    5,000             2.69        2.89      1.07
  100,000           488.77      346.00      0.71
1,000,000          4776.58     4438.09      0.93
I am puzzled about why there is no clear trend with array size.
You should pay prime attention to suggestions in the comments.
As @ChrisRackauckas points out, type instability is a common stumbling block for performant Julia code. If you want highly performant code, then make sure that your functions are type-stable. Consider annotating the return type of the function pmap and/or vcat, e.g. f(pids::Vector{Int}) = mean(vcat(pmap(mean_estimate, pids))) :: Float64 or something similar, since pmap does not strongly type its output. Another strategy is to roll your own parallel scheduler. You can use pmap source code as a springboard (see code here).
Furthermore, as @AlexMorley commented, you are confounding your performance measurements by including compilation times. Normally the performance of a function f() is measured in Julia by running it twice and timing only the second run. On the first run, the JIT compiler compiles f() before running it, while the second run uses the already-compiled function. Compilation incurs an (unwanted) performance cost, so timing the second run avoids measuring the compilation.
If possible, preallocate all outputs. In your code, you have set each worker to allocate its own zeros(iter) and its own rand(p). This can have dramatic performance consequences. A sketch of your code:
# code mean_estimate as two functions
f(p::Int) = mean(rand(p))
function g(iter::Int, p::Int)
vec_mean = zeros(iter)
for i in eachindex(vec_mean)
vec_mean[i] = f(p)
end
return mean(vec_mean)
end
# run twice, time on second run to get compute time
g(200000, 5000)
@time g(200000, 5000)
### output on my machine
# 2.792953 seconds (600.01 k allocations: 7.470 GB, 24.65% gc time)
# 0.4999951853035917
The @time macro is alerting you that the garbage collector is cleaning up a lot of allocated memory during execution, several gigabytes in fact. This kills performance. Memory allocations may be overshadowing any distinction between your serial and parallel compute times.
Lastly, remember that parallel computing incurs overhead from scheduling and managing individual workers. Your workers are computing the mean of the means of many random vectors of length 5000. But you could succinctly compute the mean (or median) of, say, 5M entries with
x = rand(5_000_000)
mean(x)
@time mean(x) # 0.002854 seconds (5 allocations: 176 bytes)
so it is unclear how your parallel computing scheme improves upon serial performance. Parallel computing generally provides the best help when your arrays are truly beefy or your calculations are arithmetically intense, and vector means probably do not fall in that domain.
One last note: you may want to peek at SharedArrays, which distribute arrays over several workers with a common memory pool, or the experimental multithreading facilities in Julia. You may find those parallel frameworks more intuitive than pmap.
I am using TensorFlow to build a CNN for an image-classification experiment, and I noticed the following phenomenon:
operation 1: tf.nn.conv2d(x, [3,3,32,32], strides=[1,1,1,1], padding='SAME')
The shape of x is [128,128,32], meaning a convolution with a 3x3 kernel on x where both the input and output channel counts are 32; the total number of multiplications is
3*3*32*32*128*128 = 150,994,944
operation 2: tf.nn.conv2d(x, [3,3,64,64], strides=[1,1,1,1], padding='SAME')
The shape of x is [64,64,64], meaning a convolution with a 3x3 kernel on x where both the input and output channel counts are 64; the total number of multiplications is
3*3*64*64*64*64 = 150,994,944
Compared with operation 1, the feature-map side length of operation 2 is halved and the channel count is doubled. The number of multiplications is the same, so the running times should be the same. But in practice the running time of operation 1 is longer than that of operation 2.
My measurement method is shown below:
Eliminating one convolution of operation 1 reduced the training time for one epoch by 23 seconds, meaning the running time of operation 1 is 23 seconds.
Eliminating one convolution of operation 2 reduced the training time for one epoch by 13 seconds, meaning the running time of operation 2 is 13 seconds.
The phenomenon reproduces every time.
My GPU is an NVIDIA GTX 980 Ti; the OS is Ubuntu 16.04.
So the question is: why is the running time of operation 1 longer than that of operation 2?
If I had to guess, it has to do with how the image is ordered in memory. Remember that in memory everything is stored in a flattened format. This means that if you have a tensor of shape [128, 128, 32], the 32 features/channels of a pixel are stored next to each other, then the pixels along a row, then the rows. https://en.wikipedia.org/wiki/Row-major_order
Accessing closely packed memory is very important to performance, especially on a GPU, which has a wide memory bus and is optimized for aligned, in-order memory access. In the case with the larger image you have to skip around the image more, and the memory access is more out of order. In case 2 you can do more in-order memory access, which gives you more speed. Multiplications are very fast operations. I bet that with a convolution, memory access is the bottleneck that limits performance.
chasep255's answer is good and probably correct.
Another possibility (or alternative way of thinking about chasep255's answer) is to consider how caching (all the little hardware tricks that can speed up memory fetches, address mapping, etc) could be producing what you see...
You have basically two things: a stream of X input data and a static filter matrix. In case 1, you have 9*1024 static elements; in case 2 you have 4 times as many. Both cases have the same total multiplication count, but in case 2 the process finds more of its data where it expects it (i.e. where it was the last time it was asked for). Net result: fewer memory-access stalls, more speed.
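If you want to time the two operations in isolation rather than by removing a layer and comparing epoch times, here is a minimal, hypothetical timing sketch. It assumes TensorFlow 2.x eager mode (names and iteration counts are illustrative); the first call is excluded from the timing because it includes cuDNN autotuning and other one-time setup:
import time
import tensorflow as tf

def time_conv(side, channels, iters=50):
    x = tf.random.normal([1, side, side, channels])    # NHWC input
    k = tf.random.normal([3, 3, channels, channels])   # 3x3 kernel, c -> c channels
    # warm-up run so one-time setup is not included in the timing
    y = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='SAME')
    _ = y.numpy()
    t0 = time.perf_counter()
    for _ in range(iters):
        y = tf.nn.conv2d(x, k, strides=[1, 1, 1, 1], padding='SAME')
    _ = y.numpy()                                       # force the queued GPU work to finish
    return (time.perf_counter() - t0) / iters

print("op1 (128x128x32):", time_conv(128, 32))
print("op2 ( 64x 64x64):", time_conv(64, 64))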
I am solving the equation A*x=B, where A is a matrix, B is a vector, and x is the unknown (answer) vector.
Hardware specs:
Intel i7 3630QM (4 cores),
nVidia GeForce GT 640M (384 CUDA cores)
Here's an example:
>> A=rand(5000);
>> B=rand(5000,1);
>> Agpu=gpuArray(A);
>> Bgpu=gpuArray(B);
>> tic;A\B;toc;
Elapsed time is 1.382281 seconds.
>> tic;Agpu\Bgpu;toc;
Elapsed time is 4.775395 seconds.
Somehow the GPU is much slower... Why? It is also slower in FFT, INV, and LU calculations, which should be related to matrix division.
However, the GPU is much faster at matrix multiplication (the same data):
>> tic;A*B;toc;
Elapsed time is 0.014700 seconds.
>> tic;Agpu*Bgpu;toc;
Elapsed time is 0.000505 seconds.
The main question is why the GPU A\B (mldivide) is so slow compared to the CPU.
UPDATED
Here are some more results when A, B (on CPU), AA, BB (on GPU) are rand(5000):
>> tic;fft(A);toc;
Elapsed time is *0.117189* seconds.
>> tic;fft(AA);toc;
Elapsed time is 1.062969 seconds.
>> tic;fft(AA);toc;
Elapsed time is 0.542242 seconds.
>> tic;fft(AA);toc;
Elapsed time is *0.229773* seconds.
>> tic;fft(AA);toc;
The times marked with asterisks are the stable ones. However, the GPU is almost twice as slow. By the way, why is the GPU even slower on the first two attempts? Is it compiled twice at first?
In addition:
>> tic;sin(A);toc;
Elapsed time is *0.121008* seconds.
>> tic;sin(AA);toc;
Elapsed time is 0.020448 seconds.
>> tic;sin(AA);toc;
Elapsed time is 0.157209 seconds.
>> tic;sin(AA);toc;
Elapsed time is *0.000419* seconds.
After two runs the GPU is dramatically faster at sin calculations.
So, still, why is the GPU so slow at matrix division, FFT, and similar calculations, even though it is so fast at matrix multiplication and trigonometry? Actually, the question should not be posed like that... the GPU should be faster in all of these calculations, because MATLAB provides overloaded functions (mldivide, fft) for the GPU.
Could somebody help me solve these issues, please? :)
Please read up on how Matlab calculates the solutions. It will help you understand why the GPU is slower.
I'll try to say it in a few words.
A*x = b becomes L*(U*x) = b with L*U = A; write y = U*x.
So Matlab factors A into L*U (as far as I know this process cannot be done fully in parallel; only some steps can be parallelized, due to their nature).
Then Matlab solves L*y = b and finds y (this cannot be done in parallel, as each step requires data from the previous one).
Then Matlab solves U*x = y and finds x (this cannot be done in parallel, as each step requires data from the previous one).
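To see why the two triangular solves are inherently sequential, consider forward substitution for L*y = b with a lower-triangular L: each component of y depends on all previously computed components, so the steps cannot run concurrently.
$$ y_i = \frac{1}{L_{ii}}\left(b_i - \sum_{j=1}^{i-1} L_{ij}\, y_j\right), \qquad i = 1, \dots, n $$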
So since the GPU clock is slower than the CPU's, and these steps cannot be done in parallel, the CPU is faster. And no, unless you come up with a better method (good luck!), the GPU will always be slower except in some very specific cases.
Part 1 of the explanation is in the answer from user2230360, but your question is twofold, so I'll add a bit about the multiplication. As noted already, the LU factorization is not very easily parallelized even if some steps can be. Matrix multiplication, however, is very much parallelizable. If you're working with these things you should be able to do matrix multiplication by hand, and then you will know that the elements of the matrix C in A*B=C can be calculated in any order you want - hence the possibility of parallel computation. That is probably why you're seeing such lightning-fast multiplication but slow solving of linear systems. One can't be parallelized as much as the other.
As noted already, the LU factorization is not very easily parallelized even if some steps can be. Matrix multiplication, however, is very much parallelizable. If you're working with these things you should be able to do matrix multiplication by hand, and then you will know that calculating the elements of the matrix C in A*B=C can be done in any order you want - hence the possibility for parallel computation. That is probably why you're seeing so lightning fast multiplication, but slow solving of linear systems. One can't be parallelized "as much as the other".