Why does this code grow so large in memory?

If I call crash with any value for increment and 50000000 (fifty million) for spins, it ramps up and keeps growing in memory until it has choked off every last bit of memory, and then it crashes.
import Data.List (foldl')

crash :: Int -> Int -> Int
crash increment spins = snd $ foldl' spin' (0,0) [1..spins]
  where spin' = spin increment

spin increment (index,element1) spinNumber = (next, nextElementOne)
  where
    next
      | indexNIncrement >= spinNumber = 1 + (indexNIncrement `rem` spinNumber)
      | otherwise = 1 + indexNIncrement
    indexNIncrement = index + increment
    nextElementOne
      | next == 1 = spinNumber
      | otherwise = element1
I don't see how memory is leaked. Doesn't each call to spin replace the accumulator value? Doesn't it get released?

Basically, at each step foldl' will evaluate the result of the function spin', bringing it to WHNF (weak head normal form). Concretely, this means that the result will be evaluated only up to the outermost constructor.
However, the result of spin' is (next,nextElementOne), which is already in WHNF, since it starts with a pair constructor. What we want is to force the evaluation of the pair components here. One basic solution is to return
spin ... = next `seq` (nextElementOne `seq` (next, nextElementOne))
so that the components will be evaluated before returning the pair.
A more modern approach is to exploit the BangPatterns extension instead.
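For example, a minimal sketch of that variant (my code; it assumes the surrounding definitions stay unchanged):

{-# LANGUAGE BangPatterns #-}

-- The bangs force both accumulator components as soon as the pair is
-- matched, so no chain of thunks can build up across iterations.
spin increment (!index, !element1) spinNumber = (next, nextElementOne)
  where
    indexNIncrement = index + increment
    next
      | indexNIncrement >= spinNumber = 1 + (indexNIncrement `rem` spinNumber)
      | otherwise = 1 + indexNIncrement
    nextElementOne
      | next == 1 = spinNumber
      | otherwise = element1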

Haskell: map length . group is way slower than explicit recursion?

Consider this trivial algorithm of prime decomposition of an integer n: let d' be the divisor of n found last; initially, set d' = 1. Find the smallest divisor d > d' of n, and find the maximal value e such that d^e divides n. Append d^e to the answer and repeat the procedure on n/d^e. Finally, stop when n becomes 1. For simplicity, let's ignore mathematical optimizations, like stopping at sqrt n, etc.
I have implemented it in two ways. The first one generates a list of division "attempts", and then groups the successful ones by divisors. For example, for n=20, we first generate [(2,20),(2,10),(2,5),(3,5),(4,5),(5,5),(5,1)], which we then transform to the desired [(2,2),(5,1)] using group and other library functions.
The second implementation is an explicit recursion which keeps track of the exponent e along the way, appends d^e to the answer once the maximal e is reached, proceeds to finding the "next" d, and so on.
Question 1: Why does the first implementation run way slower than the second, despite the following:
Both implementations execute div, the core step of the algorithm, roughly the same number of times.
Lazy evaluation (and fusion?) should have the effect that the long list illustrated above never has to be materialized in the first place. As you can see in the code below, divTrials n, the list I am talking about, is transformed by a chain of higher-order functions. Given that, I think the map (\xs -> (head xs, length xs)) . ... . group part should tell the compiler that the list is just an intermediate:
{-# OPTIONS_GHC -O2 #-}
module GroupCheck where
import Data.List
import Data.Maybe

implement1 :: Integral t => t -> [(t,Int)] -- IMPLEMENTATION 1
implement1 = map (\xs -> (head xs, length xs)) . factorGroups where
  tryDiv (d,n)
    | n `mod` d == 0 = (d, n `div` d)
    | n == 1 = (1,1) -- hack
    | otherwise = (d+1, n)
  divTrials n = takeWhile (/=(1,1)) $ (2,n) : map tryDiv (divTrials n)
  factorGroups = filter (not.null) . map tail . group . map fst . divTrials
implement2 :: Show t => Integral t => t -> [(t,Int)] -- IMPLEMENTATION 2
implement2 num = keep2 $ tail $ go (1,0,1,num) where
  range d n = [d+1..n]
  nextd d n = fromMaybe n $ find ((0==).(n `mod`)) (range d n)
  update (d,e,de,n)
    | n `mod` d == 0 = update (d, e+1, de*d, n `div` d)
    | otherwise = (d,e,de,n)
  go (d,e,de,1) = [(d,e,de,1)]
  go (d,e,de,n) = (d,e,de,n) : go (update (nextd d n, 0, 1, n))
  keep2 = map (\(d,e,_,_) -> (d,e))

main :: IO ()
main = do
  let n = 293872
  let ans1 = implement1 n
  let ans2 = implement2 n
  print ans1
  print ans2
Profiling tells us that tryDiv and divTrials together eat up >99% of the entire execution time:
> stack ghc -- -main-is GroupCheck.main -prof -fprof-auto -rtsopts GroupCheck
> ./GroupCheck +RTS -p >/dev/null && cat GroupCheck.prof
GroupCheck +RTS -p -RTS

total time  = 18.34 secs (18338 ticks @ 1000 us, 1 processor)
total alloc = 17,561,404,568 bytes (excludes profiling overheads)

COST CENTRE           MODULE      SRC                           %time  %alloc
implement1.divTrials  GroupCheck  GroupCheck.hs:12:3-69          52.6    69.2
implement1.tryDiv     GroupCheck  GroupCheck.hs:(8,3)-(11,25)    47.2    30.8
Question 1.5: So... what's so bad about these functions? Also,
Question 2: In a more general case of having to aggregate contiguous blocks of identical elements from a nondecreasing sequence, should we go the bulky implement2 way if we want speed? (Again, ignoring domain-specific optimizations.)
Or did I totally miss something obvious? Thanks!
Just to establish a baseline, I ran your program on a slightly larger starting number (so that time didn't print out 0.00s). I chose n = 2938722345623 for no particular reason. Here are the timings before starting to tweak things:
ans1: indistinguishable from infinity (I finished writing this entire answer and it was still running, about 26 minutes in total)
ans2: 2.78s
The first thing to try is to tweak this line:
divTrials n = takeWhile (/=(1,1)) $ (2,n): map tryDiv (divTrials n)
This looks like a pretty natural definition, but it turns out that GHC never memoizes function calls. So if you want to make a list that's defined recursively in terms of itself, you must not make a function call in the recursion. Here's how:
divTrials n = xs where xs = takeWhile (/=(1,1)) $ (2,n): map tryDiv xs
Just that change brings the time down to 7.85s. Still off by a factor of about 3, but much much better.
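A standard illustration of this sharing rule (my example, not from the original answer): a self-referential name is computed once, while a self-referential function call is recomputed on every use.

-- Shared: fibs names a single list, so each element is computed only once.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Not shared: each recursive call rebuilds the whole list from scratch,
-- so forcing the nth element takes exponential time.
fibsSlow :: () -> [Integer]
fibsSlow () = 0 : 1 : zipWith (+) (fibsSlow ()) (tail (fibsSlow ()))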
The less obvious problem lies here:
factorGroups = filter (not.null).map tail.group.map fst.divTrials
Putting the group so early breaks fusion, causing that intermediate list to actually be materialized. This means allocating and deallocating a lot of cons cells and tuples. Here's an implementation that has the same spirit, but puts more work before the group:
tryDiv d n
  | n `mod` d == 0 = d : tryDiv d (n `div` d)
  | n == 1 = []
  | otherwise = tryDiv (d+1) n
factorGroups = group . tryDiv 2
With that, we are down to 2.65s -- slightly faster than ans2, though I only did one test of each so it's pretty likely to just be measurement noise.

Can I do two sums in parallel with collection functions in F#, or generally optimize the whole thing?

I have the following code:
open System.Collections.Generic

// volume queue
let volumeQueue = Queue<float>()
let queueSize = 10 * 500 // 10 events per second, 500 seconds max

// add a signed volume to the queue
let addToVolumeQueue x =
    volumeQueue.Enqueue(x)
    while volumeQueue.Count > queueSize do volumeQueue.TryDequeue() |> ignore

// calculate the direction of the queue, normalized between +1 (buy) and -1 (sell)
let queueDirection length =
    let subQueue =
        volumeQueue
        |> Seq.skip (queueSize - length)
    let boughtVolume =
        subQueue
        |> Seq.filter (fun l -> l > 0.)
        |> Seq.sum
    let totalVolume =
        subQueue
        |> Seq.sumBy (fun l -> abs l)
    2. * boughtVolume / totalVolume - 1.
What this does is run a fixed length queue to which transaction volumes are added, some positive, some negative.
It then calculates the ratio of the positive entries to the total and normalizes it between +1 and -1, with 0 meaning the positive and negative sums are half and half.
There is no optimization right now but this code's performance will matter. So I'd like to make it fast, without compromising readability (it's called roughly every 100ms).
The first thing that comes to mind is to do the two sums at once (the positive numbers and all the numbers) in a single loop. It can easily be done in a for loop, but can it be done with collection functions?
The next option I was thinking about is to get rid of the queue and use a circular buffer, but since the code is run on a part of the buffer (the last 'length' items), I'd have to handle the wrap-around part; I guess I could extend the buffer to a power-of-two size and get automatic wrap-around that way.
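Something like this sketch of the power-of-two idea (untested; names hypothetical):

// With a power-of-two capacity, index &&& mask wraps around with no modulo.
let capacity = 8192                          // next power of two >= queueSize
let mask = capacity - 1
let buffer = Array.zeroCreate<float> capacity
let mutable writePos = 0

let add x =
    buffer.[writePos &&& mask] <- x
    writePos <- writePos + 1

// visit the last `length` items, oldest first
let iterLast length f =
    for i in writePos - length .. writePos - 1 do
        f buffer.[i &&& mask]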
Any idea is welcome, but my original question is: can I do the two sums in a single pass with the collection functions? I can't iterate over the queue with an indexer, so I can't use a for loop (or I guess I'd have to instantiate an iterator).
First of all, there is nothing inherently wrong with using mutable variables and loops in F#. Especially at a small scale (e.g. inside a function), this can often be quite readable - or at least, easy to understand if there is a suitable comment.
To do this using a single iteration, you could use fold. This basically calculates the two sums in a single iteration at the cost of some readability:
let queueDirectionFold length =
    let boughtVolume, totalVolume =
        volumeQueue
        |> Seq.skip (queueSize - length)
        |> Seq.fold (fun (bv, tv) v ->
            (if v > 0.0 then bv + v else bv), tv + abs v) (0.0, 0.0)
    2. * boughtVolume / totalVolume - 1.
As I mentioned earlier, I would also consider using a loop. The loop itself is quite simple, but some complexity is added by the fact that you need to skip some elements. Still, I think it's quite clear:
let queueDirectionLoop length =
    let mutable i = 0
    let mutable boughtVolume = 0.
    let mutable totalVolume = 0.
    for v in volumeQueue do
        if i >= queueSize - length then
            totalVolume <- totalVolume + abs v
            if v > 0. then boughtVolume <- boughtVolume + v
        i <- i + 1
    2. * boughtVolume / totalVolume - 1.
I tested the performance using 4000 elements and here is what I got:
#time
let rnd = System.Random()
for i in 0 .. 4000 do volumeQueue.Enqueue(rnd.NextDouble())
for i in 0 .. 10000 do ignore(queueDirection 1000) // ~900 ms
for i in 0 .. 10000 do ignore(queueDirectionFold 1000) // ~460 ms
for i in 0 .. 10000 do ignore(queueDirectionLoop 1000) // ~370 ms
Iterating over the queue just once definitely helps with performance. Doing this in an imperative loop helps the performance even more - this may be worth it if you care about performance. The code may be a bit less readable than the original, but I think it's not much worse than fold.
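A further untested variant (my sketch, not benchmarked above): copy the queue to an array once per call and index directly, trading one allocation per call for a simpler loop with no per-element skip test:

let queueDirectionArray length =
    let arr = volumeQueue.ToArray()          // one allocation per call
    let mutable boughtVolume = 0.
    let mutable totalVolume = 0.
    for i in arr.Length - length .. arr.Length - 1 do
        let v = arr.[i]
        totalVolume <- totalVolume + abs v
        if v > 0. then boughtVolume <- boughtVolume + v
    2. * boughtVolume / totalVolume - 1.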

Performance of Longest Substring Without Repeating Characters in Haskell

Upon reading this Python question and proposing a solution, I tried to solve the same challenge in Haskell.
I've come up with the code below, which seems to work. However, since I'm pretty new to this language, I'd like some help in understanding whether the code is good performance-wise.
import Data.Function (on)
import Data.List (foldl')

lswrc :: String -> String
lswrc s = reverse $ fst $ foldl' step ("","") s
  where
    step ("","") c = ([c],[c])
    step (maxSubstr,current) c
      | c `elem` current = step (maxSubstr, init current) c
      | otherwise = let candidate = (c:current)
                        longerThan = (>) `on` length
                        newMaxSubstr = if maxSubstr `longerThan` candidate
                                         then maxSubstr
                                         else candidate
                    in (newMaxSubstr, candidate)
Some points that I think could be better than they are:
I carry around a pair of strings (the longest tracked, and the current candidate), but I only need the former; thinking procedurally, there's no way to escape this, but maybe FP allows another approach?
I construct (c:current) but use it only in the else branch; I could make a more complicated longerThan that adds 1 to the length of its second argument, so that I could apply it to maxSubstr and current and construct (c:current) in the else branch without even giving it a name.
I drop the last element of current when c is in the current string, because I'm piling up the strings with :; I could instead pattern match when checking c against the string (as in c `elem` current@(a:as)), but then when adding the new character I would have to do current ++ [c], which I know is not as performant as c:current.
I use foldl' (as I know foldl doesn't really make sense); foldr could be an alternative, but since I don't see how laziness enters this problem, I can't tell which one would be better.
Running elem on every iteration makes your algorithm Ω(n^2) (for strings with no repeats). Running length on, in the worst case, every iteration makes your algorithm Ω(n^2) (for strings with no repeats). Running init a lot makes your algorithm Ω(n*sqrt(n)) (for strings that are sqrt(n) repetitions of a sqrt(n)-long string, with every other one reversed, and assuming an O(1) elem replacement).
A better way is to pay one O(n) cost up front to copy into a data structure with constant-time indexing, and to keep a set (or similar data structure) of seen elements rather than a flat list. Like this:
import Data.Set (Set)
import Data.Vector (Vector)
import qualified Data.Set as S
import qualified Data.Vector as V

lswrc2 :: String -> String
lswrc2 "" = ""
lswrc2 s_ = go S.empty 0 0 0 0 where
  s = V.fromList s_
  n = V.length s
  at = V.unsafeIndex s
  go seen lo hi bestLo bestHi
    | hi == n = V.toList (V.slice bestLo (bestHi-bestLo+1) s)
    -- it is probably faster (possibly asymptotically so?) to use findIndex
    -- to immediately pick the correct next value of lo
    | at hi `S.member` seen = go (S.delete (at lo) seen) (lo+1) hi bestLo bestHi
    | otherwise = let rec = go (S.insert (at hi) seen) lo (hi+1) in
        if hi-lo > bestHi-bestLo then rec lo hi else rec bestLo bestHi
This should have O(n*log(n)) worst-case performance (achieving that worst case on strings with no repeats). There may be ways that are better still; I haven't thought super hard about it.
On my machine, lswrc2 consistently outperforms lswrc on random strings. On the string ['\0' .. '\100000'], lswrc takes about 40s and lswrc2 takes 0.03s. lswrc2 can handle [minBound .. maxBound] in about 0.4s; I gave up after more than 20 minutes of letting lswrc chew on that list.
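As a quick sanity check, a hypothetical driver (not part of the original answer; outputs traced by hand):

main :: IO ()
main = do
  print (lswrc2 "abcabcbb")  -- "abc" (ties go to the earliest longest window)
  print (lswrc2 "bbbbb")     -- "b"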

Huge memory allocation running a Julia function?

I am trying to run the following function at the Julia prompt, but when timing it I see far too many memory allocations, and I can't figure out why.
function pdpf(L::Int64, iters::Int64)
    snr_dB = -10
    snr = 10^(snr_dB/10)
    Pf = 0.01:0.01:1
    thresh = rand(100)
    Pd = rand(100)
    for m = 1:length(Pf)
        i = 0
        for k = 1:iters
            n = randn(L)
            s = sqrt(snr) * randn(L)
            y = s + n
            energy_fin = (y'*y) / L
            @inbounds thresh[m] = erfcinv(2Pf[m]) * sqrt(2/L) + 1
            if energy_fin[1] >= thresh[m]
                i += 1
            end
        end
        @inbounds Pd[m] = i/iters
    end
    #thresh = erfcinv(2Pf) * sqrt(2/L) + 1
    #Pd_the = 0.5 * erfc(((thresh - (snr + 1)) * sqrt(L)) / (2*(snr + 1)))
end
Running that function at the Julia prompt on my laptop, I get the following shocking numbers:
julia> @time pdpf(1000, 10000)
 17.621551 seconds (9.00 M allocations: 30.294 GB, 7.10% gc time)
What is wrong with my code? Any help is appreciated.
I don't think this memory allocation is so surprising. For instance, consider all of the times that the inner loop gets executed:
for m = 1:length(Pf) gives you 100 executions.
for k = 1:iters gives you 10,000 executions, based on the arguments you supply to the function.
randn(L) gives you a random vector of length 1,000, based on the arguments you supply to the function.
Thus, just considering these, you've got 100 * 10,000 * 1,000 = 1 billion Float64 random numbers being generated. Each one of them takes 64 bits = 8 bytes, i.e. 8GB right there. And you've got two calls to randn(L), which means that you're at 16GB of allocations already.
You then have y = s + n, which means another 8GB of allocations, taking you up to 24GB. I haven't looked in detail at the remaining code to see what takes you from 24GB to 30GB of allocations, but this should show you that it's not hard for gigabytes of allocations to add up in your code.
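One way to attack those allocations directly (my sketch, not from the original answer; it assumes a recent Julia where randn! lives in the Random standard library) is to preallocate the vectors once and fill them in place:

using Random

# Preallocate once; the inner loop then allocates nothing.
n = zeros(L)
s = zeros(L)
y = zeros(L)
for k = 1:iters
    randn!(n)                   # overwrite n with fresh standard normals
    randn!(s)
    @. y = sqrt(snr) * s + n    # in-place broadcast, no temporary arrays
    # ... energy/threshold test as before ...
end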
If you're looking at places to improve, I'll give you a hint that these lines can be improved by using the properties of normal random variables:
n = randn(L)
s = sqrt(snr) * randn(L)
y = s + n
You should easily be able to cut down the allocations here from 24GB to 8GB in this way. Note that y will be a normal random variable here as you've defined it, and think up a way to generate a normal random variable with an identical distribution to what y has now.
Another small thing: snr is a constant inside your function, yet you keep taking its sqrt one million separate times. In some settings 'checking your work' can be helpful, but I think you can be confident that the computer will get it right the first time, so you don't need to make it keep re-doing this calculation ;). There are other similar places where you can avoid duplicate computations; I'll leave those for you to locate.
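For instance, hoisting the loop-invariant value:

sqrtsnr = sqrt(snr)        # computed once, before the loops
# ... inside the inner loop:
s = sqrtsnr * randn(L)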
aireties gives a good answer for why you have so many allocations, but you can do more to reduce their number. Using the properties of normal random variables, we know that y = s + n is really y = sqrt(snr) * randn(L) + randn(L), and so has the same distribution as y = rvvar * randn(L), where rvvar = sqrt(1 + sqrt(snr)^2) is defined outside the loop (thanks for the fix!). This halves the number of random variables needed.
Outside the loop you can also precompute sqrt(2/L) to cut down the time a little more.
I don't think transpose is special-cased yet, so try using dot(y,y) instead of y'*y. I know for sure that dot is just a loop with no transpose, while y'*y may materialize a transpose depending on the version of Julia.
Something that would help performance (but not allocations) would be to generate one big randn(L, iters) and loop through it. The reason is that producing all of your random numbers at once is faster, since the generator can use SIMD and a bunch of other goodies. If you want to do that implicitly, without changing your code much, you can use ChunkedArrays.jl: initialize with rands = ChunkedArray(randn, L), and then every time you want a randn(L) you instead use next(rands). Internally, the ChunkedArray makes bigger vectors and replenishes them as needed, so you can just get your randn(L) without having to keep track of any of that; see the sketch below.
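Based only on the usage just described (I have not re-checked the ChunkedArrays.jl API myself), the drop-in change would look roughly like:

using ChunkedArrays

rands = ChunkedArray(randn, L)   # internally buffers big batches of randoms
# ... then, inside the inner loop, replace randn(L) with:
y = rvvar * next(rands)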
Edit:
ChunkedArrays probably only save time when L is smaller. This gives the code:
function pdpf(L::Int64, iters::Int64)
    snr_dB = -10
    snr = 10^(snr_dB/10)
    Pf = 0.01:0.01:1
    thresh = rand(100)
    Pd = rand(100)
    rvvar = sqrt(1 + sqrt(snr)^2)
    for m = 1:length(Pf)
        i = 0
        for k = 1:iters
            y = rvvar*randn(L)
            energy_fin = (y'*y) / L
            @inbounds thresh[m] = erfcinv(2Pf[m]) * sqrt(2/L) + 1
            if energy_fin[1] >= thresh[m]
                i += 1
            end
        end
        @inbounds Pd[m] = i/iters
    end
end
which runs in half the time of the version with two randn calls. Indeed, from the profile viewer we get:
@profile pdpf(1000, 10000)
using ProfileView
ProfileView.view()
The profile (screenshot not reproduced here) showed the line y = rvvar*randn(L) dominating, so the vast majority of the time is random number generation. Last time I checked, you could still get a decent speedup on random number generation by changing to the VSL.jl library, but you need MKL linked to your Julia build. Note that from the Google Summer of Code page you can see that there is a project to make a repo RNG.jl with faster pseudo-RNGs. It looks like it already has a few new ones implemented. You may want to check them out and see if they give speedups (or help out with that project!).

Optimizing repeatedly called math function

I am writing some code as part of a nonlinear regression tool, and I am trying to figure out an approach for returning the nth partial derivative of a given function that strikes a good balance between readable and fast. The function (and the analytic representations of the partials) are known at runtime, so these can be hardcoded. Here's what I have so far (which works):
open System

let getPartials (paramVect: array<float>) idx =
    let a = paramVect.[0]
    let b = paramVect.[1]
    let c = paramVect.[2]
    match idx with
    | 1 -> (fun x -> (1.0+b+c*x)**(-1.0/b)) // df(x)/da
    | 2 -> (fun x -> ((a*(1.0+c*b*x)**(-(b+1.0)/b))*((b*c*x+1.0)*Math.Log(b*c*x+1.0)-b*c*x))/(b*b)) // df(x)/db
    | 3 -> (fun x -> -a*x*(b*c*x+1.0)**(-(b+1.0)/b)) // df/dc
    | _ -> (fun x -> 0.0) // everything else is zero
The way I am using this is to first construct a partial application with the parameter vector, so that I minimize the number of times it needs to be passed in. Then I repeatedly call (getPartials(i) x_val) to construct a Jacobian. This function gets called an extremely large number of times over the lifecycle of the program.
I am getting pretty acceptable performance with this; however, I suspect it can be improved. Profiling shows that the evaluation of the 2nd function (the long one) is a CPU drain; can this be optimized? I am also unsure whether the anonymous functions create a performance problem, as readable as they are...
I am brand new to F# programming, so please let me know if you spot any egregious problems with either the style/form or the performance!
Thank you
Update: after implementing the changes suggested by JohnPalmer and refactoring so that instead of returning an anonymous function which accepts the x-value as an argument, it instead does the whole calculation in-place, I am seeing approximately a 300% speed increase. It was more convenient to be able to return the partial functions, but not worth the cost.
let getPartials (paramVect: array<float>) idx x =
    let a = paramVect.[0]
    let b = paramVect.[1]
    let cbx = paramVect.[1] * paramVect.[2] * x
    match idx with
    | 1 -> (1.0+cbx)**(-1.0/b) // df(x)/da
    | 2 -> ((a*(1.0+cbx)**(-(b+1.0)/b))*((cbx+1.0)*Math.Log(cbx+1.0)-cbx))/(b**2.0) // df(x)/db
    | 3 -> -a*x*(cbx+1.0)**(-(b+1.0)/b) // df/dc
    | _ -> 0.0 // everything else is zero
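For context, a hypothetical call site (names assumed, not from the question) building one row of the Jacobian with this updated signature might look like:

// Hypothetical helper: one Jacobian row for a single data point x.
let jacobianRow (paramVect: array<float>) (x: float) =
    [| for i in 1 .. 3 -> getPartials paramVect i x |]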
The most likely reason for the anonymous functions to cause a performance problem would be the fact that you create a new heap object each time you call the getPartials function. If you have only a small number of different paramVects then you might get some performance benefit by caching the anonymous functions.
As for the evaluation of the second expression, you might try this (taking John Palmer's suggestion to eliminate the common subexpressions):
fun x -> let bcx = b * c * x
         let bcx1 = bcx + 1.0
         a * bcx1 ** (-(b+1.0)/b) * (bcx1 * Math.Log bcx1 - bcx) / (b*b)
