Execution time decreases at the beginning and rises later - performance

This is the execution time log. As you can see, it gets faster and faster until it takes about 1.5 s per iteration, and then it gets slower and slower:
iter: 0/700000
loss:8.13768323263
speed: 4.878s / iter
iter: 1/700000
loss:4.69941059748
speed: 3.162s / iter
...
...
...
iter: 1560/700000
loss:2.16679636637
speed: 1.496s / iter
iter: 1561/700000
loss:2.9271744887
speed: 1.496s / iter
...
...
...
iter: 3698/700000
loss:1.47574504217
speed: 1.701s / iter
iter: 3699/700000
loss:1.75555475553
speed: 1.701s / iter
I am using graph.finalize() to freeze the graph, and I installed TensorFlow 1.0 from source, using jemalloc and building with XLA, SSE, etc.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord, sess=sess)
sess.graph.finalize() # Graph is read-only after this statement.
I also followed this GitHub repo to implement the image reader and to accumulate gradients (like iter_size in Caffe); all ops are created outside the training loop.
Not sure if relevant: GPU memory grows slightly, from 5707 MiB to 5717 MiB, and GPU utilization becomes low and erratic:
1% -> 59% -> 1% -> 99% -> 0% -> 54% -> 1% -> 48%
Has anyone run into this situation before, or does anyone have suggestions on how to debug it?

Related

High GC time for simple mapreduce problem

I have a simulation program written in Julia that does something equivalent to this as part of its main loop:
# Some fake data
M = [randn(100,100) for m=1:100, n=1:100]
W = randn(100,100)
work = zip(W,M)
result = mapreduce(x -> x[1]*x[2], +,work)
In other words, a simple sum of weighted matrices. Timing the above code yields
0.691084 seconds (79.03 k allocations: 1.493 GiB, 70.59% gc time, 2.79% compilation time)
I am surprised about the large number of memory allocations, as this problem should be possible to do in-place. To see if it was my use of mapreduce that was wrong I also tested the following equivalent implementation:
@time begin
    res = zeros(100,100)
    for m=1:100
        for n=1:100
            res += W[m,n] * M[m,n]
        end
    end
end
which gave
0.442521 seconds (50.00 k allocations: 1.491 GiB, 70.81% gc time)
So, if I wrote this in C++ or Fortran it would be simple to do all of this in-place. Is this impossible in Julia? Or am I missing something here...?
It is possible to do it in place like this:
function ws(W, M)
    res = zeros(100,100)
    for m=1:100
        for n=1:100
            @. res += W[m,n] * M[m, n]
        end
    end
    return res
end
and the timing is:
julia> @time ws(W, M);
0.100328 seconds (2 allocations: 78.172 KiB)
Note that in order to perform this operation in-place I used broadcasting (I could also use loops, but it would be the same).
The problem with your code is the line:
res += W[m,n] * M[m,n]
Here you get two allocations:
when you do the multiplication W[m,n] * M[m,n], a new matrix is allocated;
when you do the addition res += ..., a matrix is allocated again.
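To put rough numbers on that: a 100x100 Float64 matrix is about 78 KiB (compare the 78.172 KiB reported by @time for ws above), so two temporary matrices per grid point over the 100x100 grid comes to roughly 2 * 10,000 * 78 KiB ≈ 1.5 GiB, which matches the ~1.49 GiB of allocation reported for the original loop.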
By using broadcasting with @. you perform an in-place operation; see https://docs.julialang.org/en/v1/manual/mathematical-operations/#man-dot-operators for more explanation.
Additionally, note that I have wrapped the code inside a function. If you do not, then access to both W and M (which are globals) is type-unstable, which also causes allocations; see https://docs.julialang.org/en/v1/manual/performance-tips/#Avoid-global-variables.
I'd like to add something to Bogumił's answer. The missing broadcast is the main problem, but in addition, the loop and the mapreduce variant differ in a fundamental semantic way.
The purpose of mapreduce is to reduce by an associative operation with identity element init in an unspecified order. This in particular also includes the (theoretical) option of running parts in parallel and doesn't really play well with mutation. From the docs:
The associativity of the reduction is implementation-dependent. Additionally, some implementations may reuse the return value of f for elements that appear multiple times in itr. Use mapfoldl or
mapfoldr instead for guaranteed left or right associativity and invocation of f for every value.
and
It is unspecified whether init is used for non-empty collections.
What the loop variant really corresponds to is a fold, which has a well-defined order and initial (not necessarily identity) element and can thus use an in-place reduction operator:
Like reduce, but with guaranteed left associativity. If provided, the keyword argument init will be used exactly once.
julia> @benchmark foldl((acc, (m, w)) -> (@. acc += m * w), $work; init=$(zero(W)))
BenchmarkTools.Trial: 45 samples with 1 evaluation.
Range (min … max): 109.967 ms … 118.251 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 112.639 ms ┊ GC (median): 0.00%
Time (mean ± σ): 112.862 ms ± 1.154 ms ┊ GC (mean ± σ): 0.00% ± 0.00%
▄▃█ ▁▄▃
▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄███▆███▄▁▄▁▁▄▁▁▄▁▁▁▁▁▄▁▁▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄ ▁
110 ms Histogram: frequency by time 118 ms <
Memory estimate: 0 bytes, allocs estimate: 0.
julia> @benchmark mapreduce(Base.splat(*), +, $work)
BenchmarkTools.Trial: 12 samples with 1 evaluation.
Range (min … max): 403.100 ms … 458.882 ms ┊ GC (min … max): 4.53% … 3.89%
Time (median): 445.058 ms ┊ GC (median): 4.04%
Time (mean ± σ): 440.042 ms ± 16.792 ms ┊ GC (mean ± σ): 4.21% ± 0.92%
▁ ▁ ▁ ▁ ▁ ▁ ▁▁▁ █ ▁
█▁▁▁▁▁▁▁▁▁▁▁█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█▁▁▁█▁▁▁▁▁▁█▁█▁▁▁▁███▁▁▁▁▁█▁▁▁█ ▁
403 ms Histogram: frequency by time 459 ms <
Memory estimate: 1.49 GiB, allocs estimate: 39998.
Think of it this way: if you wrote the function as a parallel for loop with a (+) reduction, iteration would also have an unspecified order, and you would have memory overhead from copying the individual results to the accumulating thread.
Thus, there is a trade-off. In your example, allocation/copying dominates. In other cases the mapped operation might dominate, and parallel reduction (with unspecified order, but copying overhead) would be worth it.

deeplearning and deepwater models give very different logloss (0.4 vs 0.6)

On AWS, I followed the instructions here and launched a g2.2xlarge EC2 instance using the community AMI ami-97591381 (H2O version: 3.13.0.356).
This is my code, which you can run as I made the S3 links public:
library(h2o)
library(jsonlite)
library(curl)
localH2O = h2o.init()
df.truth <- h2o.importFile("https://s3.amazonaws.com/nw.data.test.us.east/df.truth.zeroed", header = T, sep=",")
df.truth$isFemale <- h2o.asfactor(df.truth$isFemale)
hotnames.truth <- fromJSON("https://s3.amazonaws.com/nw.data.test.us.east/hotnames.json", simplifyVector = T)
# Training and validation sets
splits <- h2o.splitFrame(df.truth, c(0.9), seed=1234)
train.truth <- h2o.assign(splits[[1]], "train.truth.hex")
valid.truth <- h2o.assign(splits[[2]], "valid.truth.hex")
# Train a model using non-GPU deeplearning
dl.2 <- h2o.deeplearning(
training_frame = train.truth, model_id="dl.2",
validation_frame = valid.truth,
x=setdiff(hotnames.truth[1:(length(hotnames.truth)/2)], c("isFemale", "nwtcs")),
y="isFemale", stopping_metric = "AUTO", seed = 1,
sparse = F, mini_batch_size = 20)
# Train a model using GPU-enabled deepwater
dw.2 <- h2o.deepwater(
training_frame = train.truth, model_id="dw.2",
validation_frame = valid.truth,
x=setdiff(hotnames.truth[1:(length(hotnames.truth)/2)], c("isFemale", "nwtcs")),
y="isFemale", stopping_metric = "AUTO", seed = 1,
sparse = F, mini_batch_size = 20)
When I inspect the two models, to my surprise I see a large difference in logloss:
Non-GPU
print(dl.2)
Model Details:
==============
H2OBinomialModel: deeplearning
Model ID: dl.2
Status of Neuron Layers: predicting isFemale, 2-class classification, bernoulli distribution, CrossEntropy loss, 160,802 weights/biases, 2.0 MB, 1,041,465 training samples, mini-batch size 1
layer units type dropout l1 l2 mean_rate rate_rms momentum
1 1 600 Input 0.00 %
2 2 200 Rectifier 0.00 % 0.000000 0.000000 0.104435 0.102760 0.000000
3 3 200 Rectifier 0.00 % 0.000000 0.000000 0.031395 0.055490 0.000000
4 4 2 Softmax 0.000000 0.000000 0.001541 0.001438 0.000000
mean_weight weight_rms mean_bias bias_rms
1
2 0.018904 0.144034 0.150630 0.415525
3 -0.023333 0.081914 0.545394 0.251275
4 0.029091 0.295439 -0.004396 0.357609
H2OBinomialMetrics: deeplearning
** Reported on training data. **
** Metrics reported on temporary training frame with 9877 samples **
MSE: 0.1213733
RMSE: 0.3483868
LogLoss: 0.388214
Mean Per-Class Error: 0.2563669
AUC: 0.8433182
Gini: 0.6866365
Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
0 1 Error Rate
0 6546 1079 0.141508 =1079/7625
1 836 1416 0.371226 =836/2252
Totals 7382 2495 0.193885 =1915/9877
H2OBinomialMetrics: deeplearning
** Reported on validation data. **
** Metrics reported on full validation frame **
MSE: 0.126671
RMSE: 0.3559087
LogLoss: 0.4005941
Mean Per-Class Error: 0.2585051
AUC: 0.8309913
Gini: 0.6619825
Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
0 1 Error Rate
0 11746 3134 0.210618 =3134/14880
1 1323 2995 0.306392 =1323/4318
Totals 13069 6129 0.232160 =4457/19198
GPU-enabled
print(dw.2)
Model Details:
==============
H2OBinomialModel: deepwater
Model ID: dw.2b
Status of Deep Learning Model: MLP: [200, 200], 630.8 KB, predicting isFemale, 2-class classification, 1,708,160 training samples, mini-batch size 20
input_neurons rate momentum
1 600 0.000369 0.900000
H2OBinomialMetrics: deepwater
** Reported on training data. **
** Metrics reported on temporary training frame with 9877 samples **
MSE: 0.1615781
RMSE: 0.4019677
LogLoss: 0.629549
Mean Per-Class Error: 0.3467246
AUC: 0.7289561
Gini: 0.4579122
Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
0 1 Error Rate
0 4843 2782 0.364852 =2782/7625
1 740 1512 0.328597 =740/2252
Totals 5583 4294 0.356586 =3522/9877
H2OBinomialMetrics: deepwater
** Reported on validation data. **
** Metrics reported on full validation frame **
MSE: 0.1651776
RMSE: 0.4064205
LogLoss: 0.6901861
Mean Per-Class Error: 0.3476629
AUC: 0.7187362
Gini: 0.4374724
Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
0 1 Error Rate
0 8624 6256 0.420430 =6256/14880
1 1187 3131 0.274896 =1187/4318
Totals 9811 9387 0.387697 =7443/19198
As seen above, the difference in logloss is huge between non-GPU and GPU models:
Logloss
+-----------------+---------+------+
|                 | non-GPU | GPU  |
+-----------------+---------+------+
| training data   |  0.39   | 0.63 |
| validation data |  0.40   | 0.69 |
+-----------------+---------+------+
I understand that due to the stochastic nature of the training I will get different results, but I wouldn't expect such a huge difference between non-GPU and GPU.
h2o.deeplearning is H2O's built-in deep-learning algorithm. It parallelizes very well, works well with large data, but does not use GPUs.
h2o.deepwater is a wrapper around (probably) TensorFlow, and is (probably) using your GPU (but it can use the CPU, and it can use different back-ends).
In other words, this is not a difference in using the CPU or using the GPU: you are using two different implementations of deep learning.
BTW, I'd suggest you increase the number of epochs (from the default of 10, to something like 200 - bearing in mind this means it will take 20x longer to run), and see if the difference is still there. Or compare the score history charts, and see if Tensorflow is getting there, but just needs, say, 50% more epochs to get the same logloss score.

Debugging performance bottlenecks for longest common subsequence algorithm

I am writing a longest common subsequence algorithm in Haskell using the vector library and the state monad (to encapsulate the very imperative and mutable nature of the Miller O(NP) algorithm). I have already written it in C for a project I needed it for, and am now writing it in Haskell as a way to explore how to write these imperative grid-walk algorithms with performance that matches C. The version I wrote with unboxed vectors is about 4 times slower than the C version for the same inputs (both compiled with the right optimization flags; I used both system clock time and Criterion to validate the relative timings between the Haskell and C versions, with the same data types and both large and tiny inputs). I have been trying to figure out where the performance issues might be, and would appreciate feedback - it is possible I have run into some well-known performance issue here, especially in the vector library, which I use heavily here.
In my code, I have one function called gridWalk that is called most often and also does most of the work. The performance slowdown is probably there, but I can't figure out what it could be. The complete Haskell code is here; snippets are below:
import Data.Vector.Unboxed.Mutable as MU
import Data.Vector.Unboxed as U hiding (mapM_)
import Control.Monad.ST as ST
import Control.Monad.Primitive (PrimState)
import Control.Monad (when)
import Data.STRef (newSTRef, modifySTRef, readSTRef)
import Data.Int
type MVI1 s = MVector (PrimState (ST s)) Int
cmp :: U.Vector Int32 -> U.Vector Int32 -> Int -> Int -> Int
cmp a b i j = go 0 i j
  where
    n = U.length a
    m = U.length b
    go !len !i !j | (i<n) && (j<m) && ((unsafeIndex a i) == (unsafeIndex b j)) = go (len+1) (i+1) (j+1)
                  | otherwise = len

-- function to find previous y on diagonal k for furthest point
findYP :: MVI1 s -> Int -> Int -> ST s (Int,Int)
findYP fp k offset = do
  let k0 = k+offset-1
      k1 = k+offset+1
  y0 <- MU.unsafeRead fp k0 >>= \x -> return $ 1+x
  y1 <- MU.unsafeRead fp k1
  if y0 > y1 then return (k0,y0)
             else return (k1,y1)
{-# INLINE findYP #-}

gridWalk :: Vector Int32 -> Vector Int32 -> MVI1 s -> Int -> (Vector Int32 -> Vector Int32 -> Int -> Int -> Int) -> ST s ()
gridWalk a b fp !k cmp = {-# SCC "gridWalk" #-} do
  let !offset = 1+U.length a
  (!kp,!yp) <- {-# SCC "findYP" #-} findYP fp k offset
  let xp  = yp-k
      len = {-# SCC "cmp" #-} cmp a b xp yp
      x   = xp+len
      y   = yp+len
  {-# SCC "updateFP" #-} MU.unsafeWrite fp (k+offset) y
  return ()
{-# INLINE gridWalk #-}

-- The function below executes ct times, updating the furthest points as they are found during the furthest-point search
findSnakes :: Vector Int32 -> Vector Int32 -> MVI1 s -> Int -> Int -> (Vector Int32 -> Vector Int32 -> Int -> Int -> Int) -> (Int -> Int -> Int) -> ST s ()
findSnakes a b fp !k !ct cmp op = {-# SCC "findSnakes" #-} U.forM_ (U.fromList [0..ct-1]) (\x -> gridWalk a b fp (op k x) cmp)
{-# INLINE findSnakes #-}
I added some cost center annotations, and ran profiling with a certain LCS input for testing. Here is what I get:
total time = 2.39 secs (2394 ticks @ 1000 us, 1 processor)
total alloc = 4,612,756,880 bytes (excludes profiling overheads)
COST CENTRE MODULE %time %alloc
gridWalk Main 67.5 52.7
findSnakes Main 23.2 27.8
cmp Main 4.2 0.0
findYP Main 3.5 19.4
updateFP Main 1.6 0.0
individual inherited
COST CENTRE MODULE no. entries %time %alloc %time %alloc
MAIN MAIN 64 0 0.0 0.0 100.0 100.0
main Main 129 0 0.0 0.0 0.0 0.0
CAF Main 127 0 0.0 0.0 100.0 100.0
findSnakes Main 141 0 0.0 0.0 0.0 0.0
main Main 128 1 0.0 0.0 100.0 100.0
findSnakes Main 138 0 0.0 0.0 0.0 0.0
gridWalk Main 139 0 0.0 0.0 0.0 0.0
cmp Main 140 0 0.0 0.0 0.0 0.0
while Main 132 4001 0.1 0.0 100.0 100.0
findSnakes Main 133 12000 23.2 27.8 99.9 99.9
gridWalk Main 134 16004000 67.5 52.7 76.7 72.2
cmp Main 137 16004000 4.2 0.0 4.2 0.0
updateFP Main 136 16004000 1.6 0.0 1.6 0.0
findYP Main 135 16004000 3.5 19.4 3.5 19.4
newVI1 Main 130 1 0.0 0.0 0.0 0.0
newVI1.\ Main 131 8004 0.0 0.0 0.0 0.0
CAF GHC.Conc.Signal 112 0 0.0 0.0 0.0 0.0
CAF GHC.IO.Encoding 104 0 0.0 0.0 0.0 0.0
CAF GHC.IO.Encoding.Iconv 102 0 0.0 0.0 0.0 0.0
CAF GHC.IO.Handle.FD 95 0 0.0 0.0 0.0 0.0
If I am interpreting the profiling output correctly (and assuming there is not too much distortion due to profiling), gridWalk takes most of the time, but the main functions cmp and findYP that do the heavy lifting in gridWalk seem to take very little time in the profiling report. So perhaps the bottleneck is in the forM_ wrapper that findSnakes uses to call gridWalk? The heap profile too looks very clean:
Reading the core, nothing really jumps out. I thought some values in inner loops might be boxed, but I don't spot them in the core. I hope the performance issue is due to something simple I have missed.
Update
Following @DanielFischer's suggestion, I replaced the forM_ of Data.Vector.Unboxed with that of Control.Monad in the findSnakes function, which improved the performance from 4x to 2.5x of the C version. The Haskell and C versions are now posted here if you want to try them out.
I am still digging through the core to see where the bottlenecks are. gridWalk is the most frequently called function, and for it to perform well, lcsh should reduce the whileM_ loop to a nice iterative inner loop of a condition check and the inlined findSnakes code. I suspect that in assembly this is not the case for the whileM_ loop, but since I am not very knowledgeable about translating core and locating name-mangled GHC functions in assembly, I guess it is just a matter of patiently plugging away at the problem until I figure it out. Meanwhile, any pointers on performance fixes will be appreciated.
Another possibility I can think of is the overhead of heap checks during function calls. As seen in the profiling report, gridWalk is called 16004000 times. Assuming 6 cycles per heap check (I am guessing it is less, but let us assume that), those 96024000 cycles come to roughly 0.03 seconds on a 3.33 GHz box.
Also, some performance numbers:
Haskell code (GHC 7.6.1 x86_64): It was ~0.25s before forM_ fix.
time ./T
1
real 0m0.150s
user 0m0.145s
sys 0m0.003s
C code (gcc 4.7.2 x86_64):
time ./test
1
real 0m0.065s
user 0m0.063s
sys 0m0.000s
Update 2:
Updated code is here. Using STUArray doesn't change the numbers either. The performance is about 1.5x of the C version on Mac OS X (x86_64, GHC 7.6.1), pretty similar to what @DanielFischer reported on Linux.
Haskell code:
$ time ./Diff
1
real 0m0.087s
user 0m0.084s
sys 0m0.003s
C code:
$ time ./test
1
real 0m0.056s
user 0m0.053s
sys 0m0.002s
Glancing at the Cmm, the call is tail-recursive and is turned into a loop by LLVM. But each new iteration seems to allocate new values, which also triggers heap checks, and so might explain the difference in performance. I have to think about how to write the tail recursion in such a way that no values are allocated across iterations, avoiding the heap-check and allocation overhead.
You take a huge hit at
U.forM_ (U.fromList [0..ct-1])
in findSnakes. I'm convinced that isn't supposed to happen (ticket?), but that allocates a new Vector to traverse every time findSnakes is called. If you use
Control.Monad.forM_ [0 .. ct-1]
instead, the running time roughly halves, and the allocation drops by a factor of about 500 here. (GHC optimises C.M.forM_ [0 :: Int .. limit] well, the list is eliminated, and what remains is basically a loop.) You can do slightly better by writing the loop yourself.
Some things that cause gratuitous allocation/code size bloat without hurting performance much are
the unused Bool argument of lcsh
the cmp argument to findSnakes and gridWalk; if these are never called with a different comparison than the top-level cmp, that argument leads to unnecessary code duplication.
the general type of while; specialising it to the used type ST s Bool -> ST s () -> ST s () reduces allocation (much) and also running time (slightly, but noticeably, here); a possible specialisation is sketched below.
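For illustration, a minimal sketch of what such a specialised loop could look like (the name whileST is made up here, not taken from the linked code; it is deliberately left without an INLINE pragma, see the remark about inlining whileM_ further down):
import Control.Monad (when)
import Control.Monad.ST (ST)

-- monomorphic replacement for the generic while/whileM_ loop
whileST :: ST s Bool -> ST s () -> ST s ()
whileST cond body = go
  where
    go = do
        c <- cond
        when c (body >> go)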
A general word about profiling: Compiling a programme for profiling inhibits many optimisations. In particular for libraries like vector, bytestring or text that make heavy use of fusion, profiling often produces misleading results.
For example, your original code produces here
total time = 3.42 secs (3415 ticks @ 1000 us, 1 processor)
total alloc = 4,612,756,880 bytes (excludes profiling overheads)
COST CENTRE MODULE %time %alloc ticks bytes
gridWalk Main 63.7 52.7 2176 2432608000
findSnakes Main 20.0 27.8 682 1281440080
cmp Main 9.2 0.0 313 16
findYP Main 4.2 19.4 144 896224000
updateFP Main 2.7 0.0 91 0
Just adding a bang on the binding of len in gridWalk changes nothing at all in the non-profiling version, but for the profiling version
total time = 2.98 secs (2985 ticks @ 1000 us, 1 processor)
total alloc = 3,204,404,880 bytes (excludes profiling overheads)
COST CENTRE MODULE %time %alloc ticks bytes
gridWalk Main 63.0 32.0 1881 1024256000
findSnakes Main 22.2 40.0 663 1281440080
cmp Main 7.2 0.0 214 16
findYP Main 4.7 28.0 140 896224000
updateFP Main 2.7 0.0 82 0
it makes a lot of difference. For the version including the changes mentioned above (and the bang on len in gridWalk), the profiling version says
total alloc = 1,923,412,776 bytes (excludes profiling overheads)
but the non-profiling version
1,814,424 bytes allocated in the heap
10,808 bytes copied during GC
49,064 bytes maximum residency (2 sample(s))
25,912 bytes maximum slop
1 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 2 colls, 0 par 0.00s 0.00s 0.0000s 0.0000s
Gen 1 2 colls, 0 par 0.00s 0.00s 0.0001s 0.0001s
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.12s ( 0.12s elapsed)
GC time 0.00s ( 0.00s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 0.12s ( 0.12s elapsed)
says it allocated 1000-fold less than the profiling version.
For code using vector and friends, studying the generated core (or assembly, if you are proficient in reading that) is more reliable for identifying bottlenecks than profiling (unfortunately, it is also much more time-consuming and difficult).
Concerning the update, the C runs a little slower on my box (gcc-4.7.2, -O3)
$ time ./miltest1
real 0m0.074s
user 0m0.073s
sys 0m0.001s
but the Haskell runs in about the same time:
$ time ./hsmiller
1
real 0m0.151s
user 0m0.149s
sys 0m0.001s
That is a little faster when compiling via the LLVM backend:
$ time ./hsmiller1
real 0m0.131s
user 0m0.129s
sys 0m0.001s
And when we replace the forM_ with a manual loop,
findSnakes a b fp !k !ct op = go 0
  where
    go x | x < ct    = gridWalk a b fp (op k x) >> go (x+1)
         | otherwise = return ()
it gets a bit faster,
$ time ./hsmiller
1
real 0m0.124s
user 0m0.121s
sys 0m0.002s
resp. via LLVM:
$ time ./hsmiller
1
real 0m0.108s
user 0m0.107s
sys 0m0.000s
By and large, the generated core looks fine; one small annoyance was
Main.$wa
:: forall s.
GHC.Prim.Int#
-> GHC.Types.Int
-> GHC.Prim.State# s
-> (# GHC.Prim.State# s, Main.MVI1 s #)
and a slightly roundabout implementation. That is fixed by making newVI1 strict in its second argument,
newVI1 n !x = do
Since that isn't called often, the effect on performance is of course negligible.
The meat is the core for lcsh, and that doesn't look too bad. The only boxed things in it are the Ints read from/written to the STRef, and that is inevitable. What's not so pleasant is that the core contains a lot of code duplication, but in my experience that rarely is a real performance problem, and not all duplicated code survives code generation.
and for it to perform well, lcsh should reduce whileM_ loop to a nice iterative inner loop of condition check and inlined findSnakes code.
You get an inner loop when you add an INLINE pragma to whileM_, but that loop is not nice, and, in this case it is much slower than having the whileM_ out-of-line (I'm not sure whether it's solely due to code size, but it could be).

OpenCL matrix multiplication should be faster?

I'm trying to learn how to write GPU-optimized OpenCL kernels. I took the example of matrix multiplication using square tiles in local memory. However, in the best case I got only about a 10x speedup (~50 GFLOPS) compared to numpy.dot() (~5 GFLOPS; it uses BLAS).
I found studies where they got speedups of >200x (>1000 GFLOPS):
ftp://ftp.u-aizu.ac.jp/u-aizu/doc/Tech-Report/2012/2012-002.pdf
I don't know what I'm doing wrong, whether it is just because of my GPU (an NVIDIA GTX 275), or whether it is because of some PyOpenCL overhead. I also measured how long it takes just to copy the result from the GPU to RAM, and it is only ~10% of the matrix multiplication time.
#define BLOCK_SIZE 22
__kernel void matrixMul(
__global float* Cij,
__global float* Aik,
__global float* Bkj,
__const int ni,
__const int nj,
__const int nk
){
// WARNING: interchanging the i and j dimensions lowers the performance >2x on my NVIDIA GTX 275 GPU
int gj = get_global_id(0); int gi = get_global_id(1);
int bj = get_group_id(0); int bi = get_group_id(1); // Block index
int tj = get_local_id(0); int ti = get_local_id(1); // Thread index
int oj = bi*BLOCK_SIZE; int oi = bj*BLOCK_SIZE;
float Csub =0;
__local float As [BLOCK_SIZE][BLOCK_SIZE];
__local float Bs [BLOCK_SIZE][BLOCK_SIZE];
for (int ok = 0; ok < nk; ok += BLOCK_SIZE ) {
As[ti][tj] = Aik[ nk*(gi ) + tj + ok ]; // A[i][k]
Bs[ti][tj] = Bkj[ nj*(ti+ok) + gj ]; // B[k][j]
barrier(CLK_LOCAL_MEM_FENCE);
for (int k = 0; k < BLOCK_SIZE; ++k) Csub += As[ti][k] * Bs[k][tj];
barrier(CLK_LOCAL_MEM_FENCE);
}
Cij[ nj * ( gi ) + gj ] = Csub;
}
NOTE - the strange BLOCK_SIZE=22 is the maximum BLOCK_SIZE that fits into the max work_group_size, which is 512 on my GPU. In this code the condition BLOCK_SIZE^2 < max work_group_size must hold; 22 = int(sqrt(512)). I also tried BLOCK_SIZE = 16 and 8, but both were slower than 22.
I also tried a simple matrixMul (without using local memory), but it was even 10x slower than numpy.dot().
I copied the code from here:
http://gpgpu-computing4.blogspot.cz/2009/10/matrix-multiplication-3-opencl.html
They say that even the simple version (without local memory) should run 200x faster than the CPU. I don't understand that.
The dependence of performance on N in my case is:
N = 220 numpy 3.680 [Gflops] GPU 16.428 [Gflops] speedUp 4.464
N = 330 numpy 4.752 [Gflops] GPU 29.487 [Gflops] speedUp 6.205
N = 440 numpy 4.914 [Gflops] GPU 37.096 [Gflops] speedUp 7.548
N = 550 numpy 3.849 [Gflops] GPU 47.019 [Gflops] speedUp 12.217
N = 660 numpy 5.251 [Gflops] GPU 49.999 [Gflops] speedUp 9.522
N = 770 numpy 4.565 [Gflops] GPU 48.567 [Gflops] speedUp 10.638
N = 880 numpy 5.452 [Gflops] GPU 44.444 [Gflops] speedUp 8.152
N = 990 numpy 4.976 [Gflops] GPU 42.187 [Gflops] speedUp 8.478
N = 1100 numpy 5.324 [Gflops] GPU 83.187 [Gflops] speedUp 15.625
N = 1210 numpy 5.401 [Gflops] GPU 57.147 [Gflops] speedUp 10.581
N = 1320 numpy 5.450 [Gflops] GPU 48.936 [Gflops] speedUp 8.979
NOTE - the "Gflops" number is obtained as N^3/time and it does include time required to copy results from GPU to main memory, but this time is just few percent of total time especially for N>1000
maybe more pictorial is time in secons:
N = 220 numpy 0.003 [s] GPU 0.001 [s] load 0.001 [s] speedUp 5.000
N = 330 numpy 0.008 [s] GPU 0.001 [s] load 0.001 [s] speedUp 7.683
N = 440 numpy 0.017 [s] GPU 0.002 [s] load 0.001 [s] speedUp 7.565
N = 550 numpy 0.043 [s] GPU 0.004 [s] load 0.001 [s] speedUp 11.957
N = 660 numpy 0.055 [s] GPU 0.006 [s] load 0.002 [s] speedUp 9.298
N = 770 numpy 0.100 [s] GPU 0.009 [s] load 0.003 [s] speedUp 10.638
N = 880 numpy 0.125 [s] GPU 0.010 [s] load 0.000 [s] speedUp 12.097
N = 990 numpy 0.195 [s] GPU 0.015 [s] load 0.000 [s] speedUp 12.581
N = 1100 numpy 0.250 [s] GPU 0.031 [s] load 0.000 [s] speedUp 8.065
N = 1210 numpy 0.328 [s] GPU 0.031 [s] load 0.000 [s] speedUp 10.581
N = 1320 numpy 0.422 [s] GPU 0.047 [s] load 0.000 [s] speedUp 8.979
I was thinking that maybe some speed improvement could be obtained using async_work_group_copy or even read_imageui to copy blocks to local memory. But I don't understand why I see such a big difference when I'm using basically the same code as people who say they got a 200x speedup.
Without even looking at your code, let me make some comments about your benchmarks. Let's ignore numpy and compare the maximum SP FLOPs/s and DP FLOPs/s of an Intel CPU versus Nvidia and AMD GPUs.
An Intel 2600K at 4 GHz can do 4 GHz * (8 AVX) * (2 ILP) * (4 cores) = 256 SP GFLOPs/s. For DP it's half: 128 DP GFLOPs/s. Haswell, which comes out in a few weeks, will double both of those. The Intel MKL library gets better than 80% efficiency in GEMM. My own GEMM code gets 70% on my i7-2700, so the 5 GFLOPs/s you quote with numpy is tiny and not a fair comparison.
I don't know what the GTX 275 is capable of, but I would guess it's much more than 50 GFLOPs/s.
The article you reference compares an AMD 7970. They get 848 DP GFlops/s (90% efficiency) and 2646 SP GFlops/s (70% efficiency). That's closer to 10x the performance of the CPU, not 200x!
Edit:
Your calculation of FLOPs is wrong; it should be 2.0*n^3. That's still approximate, but it's asymptotically true. Let me explain.
Consider a 3D dot product: x1*x2 + y1*y2 + z1*z2. That's 3 multiplications and 2 additions. So an n-dimensional dot product is n multiplications and (n-1) additions. A matrix product is equivalent to n*n dot products of length n, i.e. n*n*n multiplications and n*n*(n-1) additions. That's approximately 2.0*n^3 FLOPs. So you should double all your Gflops/s numbers.
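As a concrete check using the numbers from the question's table: for N = 1320 the GPU time is about 0.047 s, so the corrected rate is roughly 2 * 1320^3 / 0.047 s ≈ 98 GFLOPs/s, about double the ~49 GFLOPs/s listed; the speedups relative to numpy are unaffected, since the CPU numbers double as well.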
Edit:
You might want to consider the kernel time. It's been a while since I used OpenCL, but using the C++ bindings I did something like this:
queue = cl::CommandQueue(context, devices[device], CL_QUEUE_PROFILING_ENABLE|CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, &err);
//other code...run kernel
time_end = clevent.getProfilingInfo<CL_PROFILING_COMMAND_END>();
time_start = clevent.getProfilingInfo<CL_PROFILING_COMMAND_START>();
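Those counters are in nanoseconds, so (continuing the same sketch, reusing clevent from above) the kernel-only time would be something like:
double kernel_ms = (time_end - time_start) * 1.0e-6; // CL_PROFILING_* values are in nanoseconds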
A good GPU matrix-multiply does not just use local memory; it stores blocks of A, B, and/or C in registers (which results in higher register usage and lower occupancy, but is much faster in the end). This is because GPUs have more registers than local memory (128-256 KB vs 48 KB for NVIDIA), and registers offer as much bandwidth as the ALUs can handle.

Profiling a Haskell program

I have a piece of code that repeatedly samples from a probability distribution using sequence. Morally, it does something like this:
sampleMean :: MonadRandom m => Int -> m Float -> m Float
sampleMean n dist = do
    xs <- sequence (replicate n dist)
    return (sum xs)
Except that it's a bit more complicated. The actual code I'm interested in is the function likelihoodWeighting at this Github repo.
I noticed that the running time scales nonlinearly with n. In particular, once n exceeds a certain value it hits the memory limit, and the running time explodes. I'm not certain, but I think this is because sequence is building up a long list of thunks which aren't getting evaluated until the call to sum.
Once I get past about 100,000 samples, the program slows to a crawl. I'd like to optimize this (my feeling is that 10 million samples shouldn't be a problem) so I decided to profile it - but I'm having a little trouble understanding the output of the profiler.
Profiling
I created a short executable in a file main.hs that runs my function with 100,000 samples. Here's the output from doing
$ ghc -O2 -rtsopts main.hs
$ ./main +RTS -s
The first things I notice: it allocates nearly 1.5 GB of heap and spends 60% of its time on garbage collection. Is this generally indicative of too much laziness?
1,377,538,232 bytes allocated in the heap
1,195,050,032 bytes copied during GC
169,411,368 bytes maximum residency (12 sample(s))
7,360,232 bytes maximum slop
423 MB total memory in use (0 MB lost due to fragmentation)
Generation 0: 2574 collections, 0 parallel, 2.40s, 2.43s elapsed
Generation 1: 12 collections, 0 parallel, 1.07s, 1.28s elapsed
INIT time 0.00s ( 0.00s elapsed)
MUT time 1.92s ( 1.94s elapsed)
GC time 3.47s ( 3.70s elapsed)
RP time 0.00s ( 0.00s elapsed)
PROF time 0.23s ( 0.23s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 5.63s ( 5.87s elapsed)
%GC time 61.8% (63.1% elapsed)
Alloc rate 716,368,278 bytes per MUT second
Productivity 34.2% of total user, 32.7% of total elapsed
Here are the results from
$ ./main +RTS -p
The first time I ran this, it turned out that one function was being called repeatedly, and I found I could memoize it, which sped things up by a factor of 2. It didn't solve the space leak, however.
COST CENTRE MODULE no. entries %time %alloc %time %alloc
MAIN MAIN 1 0 0.0 0.0 100.0 100.0
main Main 434 4 0.0 0.0 100.0 100.0
likelihoodWeighting AI.Probability.Bayes 445 1 0.0 0.3 100.0 100.0
distributionLW AI.Probability.Bayes 448 1 0.0 2.6 0.0 2.6
getSampleLW AI.Probability.Bayes 446 100000 20.0 50.4 100.0 97.1
bnProb AI.Probability.Bayes 458 400000 0.0 0.0 0.0 0.0
bnCond AI.Probability.Bayes 457 400000 6.7 0.8 6.7 0.8
bnVals AI.Probability.Bayes 455 400000 20.0 6.3 26.7 7.1
bnParents AI.Probability.Bayes 456 400000 6.7 0.8 6.7 0.8
bnSubRef AI.Probability.Bayes 454 800000 13.3 13.5 13.3 13.5
weightedSample AI.Probability.Bayes 447 100000 26.7 23.9 33.3 25.3
bnProb AI.Probability.Bayes 453 100000 0.0 0.0 0.0 0.0
bnCond AI.Probability.Bayes 452 100000 0.0 0.2 0.0 0.2
bnVals AI.Probability.Bayes 450 100000 0.0 0.3 6.7 0.5
bnParents AI.Probability.Bayes 451 100000 6.7 0.2 6.7 0.2
bnSubRef AI.Probability.Bayes 449 200000 0.0 0.7 0.0 0.7
Here's a heap profile. I don't know why it claims the runtime is 1.8 seconds - this run took about 6 seconds.
Can anyone help me to interpret the output of the profiler - i.e. to identify where the bottleneck is, and provide suggestions for how to speed things up?
A huge improvement has already been achieved by incorporating JohnL's suggestion of using foldM in likelihoodWeighting. That reduced memory usage about tenfold here, and brought the GC times down to nearly or actually negligible.
A profiling run with the current source yields
probabilityIO AI.Util.Util 26.1 42.4 413 290400000
weightedSample.go AI.Probability.Bayes 16.1 19.1 255 131200080
bnParents AI.Probability.Bayes 10.8 1.2 171 8000384
bnVals AI.Probability.Bayes 10.4 7.8 164 53603072
bnCond AI.Probability.Bayes 7.9 1.2 125 8000384
ndSubRef AI.Util.Array 4.8 9.2 76 63204112
bnSubRef AI.Probability.Bayes 4.7 8.1 75 55203072
likelihoodWeighting.func AI.Probability.Bayes 3.3 2.8 53 19195128
%! AI.Util.Util 3.3 0.5 53 3200000
bnProb AI.Probability.Bayes 2.5 0.0 40 16
bnProb.p AI.Probability.Bayes 2.5 3.5 40 24001152
likelihoodWeighting AI.Probability.Bayes 2.5 2.9 39 20000264
likelihoodWeighting.func.x AI.Probability.Bayes 2.3 0.2 37 1600000
and 13MB memory usage reported by -s, ~5MB maximum residency. That's not too bad already.
Still, there remain some points we can improve. First, a relatively minor thing in the grand scheme: AI.Util.Array.ndSubRef:
ndSubRef :: [Int] -> Int
ndSubRef ns = sum $ zipWith (*) (reverse ns) (map (2^) [0..])
Reversing the list and mapping (2^) over another list is inefficient; better is
ndSubRef = L.foldl' (\a d -> 2*a + d) 0
which doesn't need to keep the entire list in memory (probably not a big deal, since the lists will be short) as reversing it does, and doesn't need to allocate a second list. The reduction in allocation is noticeable, about 10%, and that part runs measurably faster,
ndSubRef AI.Util.Array 1.7 1.3 24 8000384
in the profile of the modified run, but since it takes only a small part of the overall time, the overall impact is small. There are potentially bigger fish to fry in weightedSample and likelihoodWeighting.
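As a quick sanity check (a small example, not from the post) that the fold computes the same thing as the original ndSubRef: for ns = [1,1,0] the original gives 0*1 + 1*2 + 1*4 = 6, and L.foldl' (\a d -> 2*a + d) 0 gives ((0*2 + 1)*2 + 1)*2 + 0 = 6; both read the list as binary digits, most significant digit first.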
Let's add a bit of strictness in weightedSample to see how that changes things:
weightedSample :: Ord e => BayesNet e -> [(e,Bool)] -> IO (Map e Bool, Prob)
weightedSample bn fixed =
    go 1.0 (M.fromList fixed) (bnVars bn)
  where
    go w assignment []     = return (assignment, w)
    go w assignment (v:vs) = if v `elem` vars
        then
            let w' = w * bnProb bn assignment (v, fixed %! v)
            in go w' assignment vs
        else do
            let p = bnProb bn assignment (v,True)
            x <- probabilityIO p
            go w (M.insert v x assignment) vs
    vars = map fst fixed
The weight parameter of go is never forced, nor is the assignment parameter, thus they can build up thunks. Let's enable {-# LANGUAGE BangPatterns #-} and force updates to take effect immediately, also evaluate p before passing it to probabilityIO:
go w assignment (v:vs) = if v `elem` vars
    then
        let !w' = w * bnProb bn assignment (v, fixed %! v)
        in go w' assignment vs
    else do
        let !p = bnProb bn assignment (v,True)
        x <- probabilityIO p
        let !assignment' = M.insert v x assignment
        go w assignment' vs
That brings a further reduction in allocation (~9%) and a small speedup (~13%), but the total memory usage and maximum residency haven't changed much.
I see nothing else obvious to change there, so let's look at likelihoodWeighting:
func m _ = do
    (a, w) <- weightedSample bn fixed
    let x = a ! e
    return $! x `seq` w `seq` M.adjust (+w) x m
In the last line: first, w is already evaluated in weightedSample now, so we don't need to seq it here; and the key x is required to evaluate the updated map, so seqing that isn't necessary either. The bad thing on that line is M.adjust. adjust has no way of forcing the result of the update function, so that builds thunks in the map's values. You can force evaluation of the thunks by looking up the modified value and forcing that, but Data.Map provides a much more convenient way here, since the key at which the map is updated is guaranteed to be present: insertWith'.
func !m _ = do
    (a, w) <- weightedSample bn fixed
    let x = a ! e
    return (M.insertWith' (+) x w m)
(Note: GHC optimises better with a bang-pattern on m than with return $! ... here). That slightly reduces the total allocation and doesn't measurably change the running time, but has a great impact on total memory used and maximum residency:
934,566,488 bytes allocated in the heap
1,441,744 bytes copied during GC
68,112 bytes maximum residency (1 sample(s))
23,272 bytes maximum slop
1 MB total memory in use (0 MB lost due to fragmentation)
The biggest improvement in running time would come from avoiding randomIO; the StdGen it uses is very slow.
I am surprised by how much time the bn* functions take, but I don't see any obvious inefficiency in them.
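For illustration only, a minimal sketch of what drawing a weighted Bool from an explicit, faster generator could look like, assuming the mwc-random package (probabilityMWC is a made-up name; the original probabilityIO presumably sits on top of randomIO):
import Control.Monad.Primitive (PrimState)
import System.Random.MWC (Gen, create, uniform)

-- True with probability p, using an explicit MWC generator instead of the
-- global StdGen behind randomIO
probabilityMWC :: Double -> Gen (PrimState IO) -> IO Bool
probabilityMWC p gen = do
    x <- uniform gen    -- x :: Double, uniform on (0,1]
    return (x <= p)
A generator would be created once (e.g. gen <- create) and threaded through the sampling loop, instead of consulting the global generator on every sample.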
I have trouble digesting these profiles, but I have gotten my ass kicked before because the MonadRandom on Hackage is strict. Creating a lazy version of MonadRandom made my memory problems go away.
My colleague has not yet gotten permission to release the code, but I've put Control.Monad.LazyRandom online at pastebin. Or if you want to see some excerpts that explain a fully lazy random search, including infinite lists of random computations, check out Experience Report: Haskell in Computational Biology.
I put together a very elementary example, posted here: http://hpaste.org/71919. I'm not sure if it's anything like your example.. just a very minimal thing that seemed to work.
Compiling with -prof and -fprof-auto and running with 100000 iterations yielded the following head of the profiling output (pardon my line numbers):
8 COST CENTRE MODULE %time %alloc
9
10 sample AI.Util.ProbDist 31.5 36.6
11 bnParents AI.Probability.Bayes 23.2 0.0
12 bnRank AI.Probability.Bayes 10.7 23.7
13 weightedSample.go AI.Probability.Bayes 9.6 13.4
14 bnVars AI.Probability.Bayes 8.6 16.2
15 likelihoodWeighting AI.Probability.Bayes 3.8 4.2
16 likelihoodWeighting.getSample AI.Probability.Bayes 2.1 0.7
17 sample.cumulative AI.Util.ProbDist 1.7 2.1
18 bnCond AI.Probability.Bayes 1.6 0.0
19 bnRank.ps AI.Probability.Bayes 1.1 0.0
And here are the summary statistics:
1,433,944,752 bytes allocated in the heap
1,016,435,800 bytes copied during GC
176,719,648 bytes maximum residency (11 sample(s))
1,900,232 bytes maximum slop
400 MB total memory in use (0 MB lost due to fragmentation)
INIT time 0.00s ( 0.00s elapsed)
MUT time 1.40s ( 1.41s elapsed)
GC time 1.08s ( 1.24s elapsed)
Total time 2.47s ( 2.65s elapsed)
%GC time 43.6% (46.8% elapsed)
Alloc rate 1,026,674,336 bytes per MUT second
Productivity 56.4% of total user, 52.6% of total elapsed
Notice that the profiler pointed its finger at sample. I forced the return in that function by using $!, and here are some summary statistics afterwards:
1,776,908,816 bytes allocated in the heap
165,232,656 bytes copied during GC
34,963,136 bytes maximum residency (7 sample(s))
483,192 bytes maximum slop
68 MB total memory in use (0 MB lost due to fragmentation)
INIT time 0.00s ( 0.00s elapsed)
MUT time 2.42s ( 2.44s elapsed)
GC time 0.21s ( 0.23s elapsed)
Total time 2.63s ( 2.68s elapsed)
%GC time 7.9% (8.8% elapsed)
Alloc rate 733,248,745 bytes per MUT second
Productivity 92.1% of total user, 90.4% of total elapsed
Much more productive in terms of GC, but not much change in the time. You might be able to keep iterating in this profile/tweak fashion to target your bottlenecks and eke out some better performance.
I think your initial diagnosis is correct, and I've never seen a profiling report that's useful once memory effects kick in.
The problem is that you're traversing the list twice, once for sequence and again for sum. In Haskell, multiple list traversals of large lists are really, really bad for performance. The solution is generally to use some type of fold, such as foldM. Your sampleMean function can be written as
{-# LANGUAGE BangPatterns #-}
sampleMean2 :: MonadRandom m => Int -> m Float -> m Float
sampleMean2 n dist = foldM (\(!a) mb -> liftM (+a) mb) 0 $ replicate n dist
for example, traversing the list only once.
You can do the same sort of thing with likelihoodWeighting as well. In order to prevent thunks, it's important to make sure that the accumulator in your fold function has appropriate strictness.
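For illustration, here is a minimal, self-contained sketch of that idea with a made-up accumulator (countSamples and its types are hypothetical, not from the repo; Data.Map.Strict's insertWith plays the role of the insertWith' used in the other answer):
{-# LANGUAGE BangPatterns #-}
import Control.Monad (foldM)
import qualified Data.Map.Strict as M

-- fold n monadic samples into a map of accumulated weights; the bang on acc
-- and the strict map keep thunks from piling up in the accumulator
countSamples :: Monad m => Int -> m (Bool, Float) -> m (M.Map Bool Float)
countSamples n sampler = foldM step M.empty (replicate n sampler)
  where
    step !acc msample = do
        (x, w) <- msample
        return (M.insertWith (+) x w acc)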

Resources