I'm using this course on Machine Learning to learn F# at the same time. I've done the following homework exercise, which is the first exercise of the second week:
Run a computer simulation for flipping 1,000 virtual fair coins. Flip
each coin independently 10 times. Focus on 3 coins as follows: c1
is the first coin flipped, crand is a coin chosen randomly from
the 1,000, and cmin is the coin which had the minimum frequency of
heads (pick the earlier one in case of a tie).
Let ν1, νrand, and νmin be the fraction of heads obtained for the 3 respective coins out of the 10 tosses. Run the experiment 100,000 times in order to get a full distribution of ν1, νrand, and νmin (note that crand and cmin will change from run to run).
What is the average value of νmin?
I have produced the following code, which works fine and gives the correct answer:
let private rnd = System.Random()
let FlipCoin() = rnd.NextDouble() > 0.5
let FlipCoinNTimes N = List.init N (fun _ -> FlipCoin())
let FlipMCoinsNTimes M N = List.init M (fun _ -> FlipCoinNTimes N)
let ObtainFrequencyOfHeads tosses =
    let heads = tosses |> List.filter (fun toss -> toss = true)
    float (List.length (heads)) / float (List.length (tosses))

let GetFirstRandMinHeadsFraction allCoinsLaunchs =
    let first = ObtainFrequencyOfHeads(List.head (allCoinsLaunchs))
    let randomCoin = List.item (rnd.Next(List.length (allCoinsLaunchs))) allCoinsLaunchs
    let random = ObtainFrequencyOfHeads(randomCoin)
    let min =
        allCoinsLaunchs
        |> List.map (fun coin -> ObtainFrequencyOfHeads coin)
        |> List.min
    (first, random, min)

module Exercice1 =
    let GetResult() =
        Seq.init 100000 (fun _ -> FlipMCoinsNTimes 1000 10)
        |> Seq.map (fun oneExperiment -> GetFirstRandMinHeadsFraction oneExperiment)
        |> Seq.map (fun (first, random, min) -> min)
        |> Seq.average
However, it takes roughly 4 minutes to run on my machine. I know that it is doing a lot of work, but I'm wondering if there are some modifications that could be made to optimize it.
As I'm trying to learn F#, I'm asking for optimizations that use F# idioms, not for changes that turn the code into C-style.
Feel free to suggest any kind of improvement, in style, good practices, etc.
[UPDATE]
I have written some code to compare the proposed solutions; it is accessible here.
These are the results:
Base - result: 0.037510, time elapsed: 00:00:55.1274883, improvement: 0.99x
Matthew Mcveigh - result: 0.037497, time elapsed: 00:00:15.1682052, improvement: 3.61x
Fyodor Soikin - result: 0.037524, time elapsed: 00:01:29.7168787, improvement: 0.61x
GuyCoder - result: 0.037645, time elapsed: 00:00:02.0883482, improvement: 26.25x
GuyCoder MathNet - result: 0.037666, time elapsed: 00:00:24.7596117, improvement: 2.21x
TheQuickBrownFox - result: 0.037494, time elapsed: 00:00:34.2831239, improvement: 1.60x
The winner in terms of time improvement is GuyCoder, so I will accept his answer. However, I find his code more difficult to understand.
Allocating a large number of lists up front is heavy work; the algorithm can be processed online, e.g. via sequences or recursion. I transformed all the work into tail-recursive functions for some raw speed (the compiler will turn them into loops).
Not guaranteed to be 100% correct, but hopefully it gives you the gist of where I was going with it:
let private rnd = System.Random()
let flipCoin () = rnd.NextDouble() > 0.5
let frequencyOfHeads flipsPerCoin =
    let rec countHeads numHeads i =
        if i < flipsPerCoin then
            let isHead = flipCoin ()
            countHeads (if isHead then numHeads + 1 else numHeads) (i + 1)
        else
            float numHeads
    countHeads 0 0 / float flipsPerCoin

let getFirstRandMinHeadsFraction numCoins flipsPerCoin =
    let randomCoinI = rnd.Next numCoins
    let rec run first random min i =
        if i < numCoins then
            let frequency = frequencyOfHeads flipsPerCoin
            let first = if i = 0 then frequency else first
            let random = if i = randomCoinI then frequency else random
            let min = if min > frequency then frequency else min
            run first random min (i + 1)
        else
            (first, random, min)
    run 0.0 0.0 System.Double.MaxValue 0

module Exercice1 =
    let getResult () =
        let iterations, numCoins, numFlips = 100000, 1000, 10

        let getMinFromExperiment () =
            let (_, _, min) = getFirstRandMinHeadsFraction numCoins numFlips
            min

        let rec sumMinFromExperiments i sumOfMin =
            if i < iterations then
                sumMinFromExperiments (i + 1) (sumOfMin + getMinFromExperiment ())
            else
                sumOfMin

        let sum = sumMinFromExperiments 0 0.0
        sum / float iterations
Running your code on my computer and timing I get:
seconds: 68.481918
result: 0.47570994
Running my code on my computer and timing I get:
seconds: 14.003861
vOne: 0.498963
vRnd: 0.499793
vMin: 0.037675
with vMin being closest to the correct answer, [b] 0.01.
That is almost 5x faster.
I did not tinker with each method and data structure to figure out why and what worked; I just used many decades of experience to guide me. Clearly, not storing the intermediate values but only the results is a big improvement. Specifically, coinTest just returns the number of heads, which is an int, and not a list of the results.
Also, instead of getting a random number for each coin flip, it is advantageous to get one random number per coin and then use each bit of that number as a coin flip; that saves (number of flips - 1) calls to the random function.
I also avoided using float values until the very end. I don't consider that saving CPU time, but it did simplify the thought process: thinking only in int allowed me to concentrate on other efficiencies. I know that may sound weird, but the less I have to think about, the better the answers I get.
Finally, I only ran coinTest when it was necessary, e.g. only for the first coin, only for the random coin, and I used all tails as an exit condition.
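Before the full program, here is a minimal sketch (my own illustration, not the author's code) of the "one random number per coin" idea: draw a single integer with as many random bits as there are flips and count its set bits as heads.

let flipTenCoinsOnce (rnd: System.Random) =
    // 10 random bits from a single call; values fall in [0, 1024)
    let bits = rnd.Next(1 <<< 10)
    // popcount of those bits = number of heads out of 10 flips
    let rec countBits n acc =
        if n = 0 then acc
        else countBits (n >>> 1) (acc + (n &&& 1))
    countBits bits 0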
namespace Workspace

module main =

    [<EntryPoint>]
    let main argv =

        let rnd = System.Random()
        let randomPick (limit : int) : int = rnd.Next(limit) // [0 .. limit) it's a Python habit

        let numberOfCoins = 1000
        let numberOfFlips = 10
        let numberOfExperiements = 100000

        let coinTest (numberOfFlips : int) : int =
            let rec countHeads (flips : int) bitIndex (headCount : int) : int =
                if bitIndex < 0 then headCount
                else countHeads (flips >>> 1) (bitIndex-1) (headCount + (flips &&& 0x01))
            countHeads (randomPick ((pown 2 numberOfFlips) - 1)) numberOfFlips 0

        let runExperiement (numberOfCoins : int) (numberOfFlips : int) : (int * int * int) =
            let (randomCoin : int) = randomPick numberOfCoins
            let rec testCoin coinIndex (cFirst, cRnd, cMin, cFirstDone, cRanDone, cMinDone) : (int * int * int) =
                if (coinIndex < numberOfCoins) then
                    if (not cFirstDone || not cRanDone || not cMinDone) then
                        if (cFirstDone && cMinDone && (coinIndex <> randomCoin)) then
                            testCoin (coinIndex+1) (cFirst, cRnd, cMin, cFirstDone, cRanDone, cMinDone)
                        else
                            let headsTotal = coinTest numberOfFlips
                            let (cFirst, cRnd, cMin, cFirstDone, cRanDone, cMinDone) =
                                let cFirst = if coinIndex = 0 then headsTotal else cFirst
                                let cRnd = if coinIndex = randomCoin then headsTotal else cRnd
                                let cMin = if headsTotal < cMin then headsTotal else cMin
                                let cRanDone = if (coinIndex >= randomCoin) then true else cRanDone
                                let cMinDone = if (headsTotal = 0) then true else cMinDone
                                (cFirst, cRnd, cMin, true, cRanDone, cMinDone)
                            testCoin (coinIndex+1) (cFirst, cRnd, cMin, cFirstDone, cRanDone, cMinDone)
                    else
                        (cFirst, cRnd, cMin)
                else
                    (cFirst, cRnd, cMin)
            testCoin 0 (-1,-1,10, false, false, false)

        let runExperiements (numberOfExperiements : int) (numberOfCoins : int) (numberOfFlips : int) =
            let rec accumateExperiements index aOne aRnd aMin : (int * int * int) =
                let (cOne,cRnd,cMin) = runExperiement numberOfCoins numberOfFlips
                if index > numberOfExperiements then (aOne, aRnd, aMin)
                else accumateExperiements (index + 1) (aOne + cOne) (aRnd + cRnd) (aMin + cMin)
            let (aOne, aRnd, aMin) = accumateExperiements 0 0 0 0
            let (vOne : double) = (double)(aOne) / (double)numberOfExperiements / (double)numberOfFlips
            let (vRnd : double) = (double)(aRnd) / (double)numberOfExperiements / (double)numberOfFlips
            let (vMin : double) = (double)(aMin) / (double)numberOfExperiements / (double)numberOfFlips
            (vOne, vRnd, vMin)

        let timeIt () =
            let stopWatch = System.Diagnostics.Stopwatch.StartNew()
            let (vOne, vRnd, vMin) = runExperiements numberOfExperiements numberOfCoins numberOfFlips
            stopWatch.Stop()
            printfn "seconds: %f" (stopWatch.Elapsed.TotalMilliseconds / 1000.0)
            printfn "vOne: %A" vOne
            printfn "vRnd: %A" vRnd
            printfn "vMin: %A" vMin

        timeIt ()

        printf "Press any key to exit: "
        System.Console.ReadKey() |> ignore
        printfn ""

        0 // return an integer exit code
========================================================================
This is just an intermediate answer, because I asked whether the OP considered using MathNet Numerics idiomatic F#, and the OP wanted to see what that looked like. After running his version and this first-cut version on my machine, the OP's version is faster: OP: 75 secs, mine: 84 secs.
namespace Workspace

open MathNet.Numerics.LinearAlgebra

module main =

    [<EntryPoint>]
    let main argv =

        let rnd = System.Random()
        let flipCoin() =
            let head = rnd.NextDouble() > 0.5
            if head then 1.0 else 0.0

        let numberOfCoins = 1000
        let numberOfFlips = 10
        let numberOfExperiements = 100000
        let numberOfValues = 3
        let randomPick (limit : int) : int = rnd.Next(limit) // [0 .. limit) it's a Python habit

        let headCount (m : Matrix<float>) (coinIndex : int) : int =
            System.Convert.ToInt32((m.Row coinIndex).Sum())

        let minHeads (m : Matrix<float>) (numberOfCoins : int) (numberOfFlips : int) : int =
            let rec findMinHeads currentCoinIndex minHeadsCount minHeadsIndex =
                match currentCoinIndex,minHeadsCount with
                | -1,_ -> minHeadsCount
                | _,0 -> minHeadsCount // Can't get less than zero so stop searching.
                | _ ->
                    let currentMinHeadCount = (headCount m currentCoinIndex)
                    let nextIndex = currentCoinIndex - 1
                    if currentMinHeadCount < minHeadsCount
                    then findMinHeads nextIndex currentMinHeadCount currentCoinIndex
                    else findMinHeads nextIndex minHeadsCount minHeadsIndex
            findMinHeads (numberOfCoins - 1) numberOfFlips -1

        // Return the values for cOne, cRnd, and cMin as int values.
        // Will do division on final sum of experiments instead of after each experiment.
        let runExperiement (numberOfCoins : int) (numberOfFlips : int) : (int * int * int) =
            let (flips : Matrix<float>) = DenseMatrix.init numberOfCoins numberOfFlips (fun i j -> flipCoin())
            let cOne = headCount flips 0
            let cRnd = headCount flips (randomPick numberOfCoins)
            let cMin = minHeads flips numberOfCoins numberOfFlips
            (cOne,cRnd,cMin)

        let runExperiements (numberOfExperiements : int) (numberOfCoins : int) (numberOfFlips : int) : (int [] * int [] * int []) =
            let (cOneArray : int[]) = Array.create numberOfExperiements 0
            let (cRndArray : int[]) = Array.create numberOfExperiements 0
            let (cMinArray : int[]) = Array.create numberOfExperiements 0
            for i = 0 to (numberOfExperiements - 1) do
                let (cOne,cRnd,cMin) = runExperiement numberOfCoins numberOfFlips
                cOneArray.[i] <- cOne
                cRndArray.[i] <- cRnd
                cMinArray.[i] <- cMin
            (cOneArray, cRndArray, cMinArray)

        let (cOneArray, cRndArray, cMinArray) = runExperiements numberOfExperiements numberOfCoins numberOfFlips
        let (vOne : double) = (double)(Array.sum cOneArray) / (double)numberOfExperiements / (double)numberOfFlips
        let (vRnd : double) = (double)(Array.sum cRndArray) / (double)numberOfExperiements / (double)numberOfFlips
        let (vMin : double) = (double)(Array.sum cMinArray) / (double)numberOfExperiements / (double)numberOfFlips

        printfn "vOne: %A" vOne
        printfn "vRnd: %A" vRnd
        printfn "vMin: %A" vMin
Halfway through the coding I realized I could do all of the calculations using just int; it is only the last calculations, which generate the percentages, that need to be float or double, and even then only because the list of answers is a percentage. In theory the numbers can be compared as int to get the same understanding. If I used only int, I would have to create an int Matrix type, and that is more work than I want to do. When I get time I will switch the MathNet Matrix to an F# Array2D or something similar and check that. Note: if you tag this with MathNet, then the maintainer of MathNet (Christoph Rüegg) might answer.
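For reference, a rough sketch (my own, not the author's tested code) of what such an Array2D variant might look like, staying in int until the final averages:

let runExperimentArray2D (rnd: System.Random) (numberOfCoins: int) (numberOfFlips: int) =
    // 0/1 flips stored in a plain Array2D instead of a MathNet Matrix<float>
    let flips = Array2D.init numberOfCoins numberOfFlips (fun _ _ -> if rnd.NextDouble() > 0.5 then 1 else 0)
    let headCount coinIndex = Seq.sum (seq { for j in 0 .. numberOfFlips - 1 -> flips.[coinIndex, j] })
    let cOne = headCount 0
    let cRnd = headCount (rnd.Next numberOfCoins)
    let cMin = Seq.min (seq { for i in 0 .. numberOfCoins - 1 -> headCount i })
    (cOne, cRnd, cMin)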
I made a change to this method and it is faster by 5 seconds.
// faster
let minHeads (m : Matrix<float>) (numberOfCoins : int) (numberOfFlips : int) : int =
    let (mins : float[]) = m.FoldByRow((fun (x : float) y -> x + y), 0.0)
    let (minHead : float) = Array.min mins
    System.Convert.ToInt32(minHead)
I tried to find the smallest possible changes to your code to make it faster.
The biggest performance improvement I found was by changing the ObtainFrequencyOfHeads function so that it counts true values in the collection instead of creating an intermediate filtered collection and then counting that. I did this by using fold:
let ObtainFrequencyOfHeads tosses =
    let heads = tosses |> List.fold (fun state t -> if t then state + 1 else state) 0
    float heads / float (List.length (tosses))
Another improvement came from changing all of the lists into arrays. This was as simple as replacing every instance of List. with Array. (including the new function above).
Some might say this is less functional, because it's using a mutable collection instead of an immutable one. However, we're not mutating any arrays, just using the fact that they are cheap to create, check the length of, and look up by index. We have removed a restriction on mutation but we are still not using mutation. It is certainly idiomatic F# to use arrays for performance if required.
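As an illustration of that swap (a sketch based on the question's code, not a verbatim part of this answer), the generators and the counting function become:

let FlipCoinNTimes N = Array.init N (fun _ -> FlipCoin())
let FlipMCoinsNTimes M N = Array.init M (fun _ -> FlipCoinNTimes N)

let ObtainFrequencyOfHeads tosses =
    // count true values directly, no intermediate filtered collection
    let heads = tosses |> Array.fold (fun state t -> if t then state + 1 else state) 0
    float heads / float (Array.length tosses)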
With both of these changes I got almost a 2x performance improvement in FSI.
I have this code that iterates over some samples and builds a simple linear interpolation between the points:
foreach sample:
    base = floor(index_pointer)
    frac = index_pointer - base
    out = in[base] * (1 - frac) + in[base + 1] * frac
    index_pointer += speed

    // restart
    if(index_pointer >= sample_length)
    {
        index_pointer = 0
    }
using "speed" equal to 1, the game is done. But if the index_pointer is different than 1 (i.e. got fractional part) I need to wrap last/first element keeping the translation consistent.
How would you do this? Double indexes?
Here's an example of the values I have. Let's say an input array of 4 values: [8, 12, 16, 20].
It will be:
1.0*in[0] + 0.0*in[1]=8
0.28*in[0] + 0.72*in[1]=10.88
0.56*in[1] + 0.44*in[2]=13.76
0.84*in[2] + 0.16*in[3]=16.64
0.12*in[2] + 0.88*in[3]=19.52
0.4*in[3] + 0.6*in[4]=8 // wrong; here I need to wrap
The last point is wrong: in[4] comes out as 0 because I don't have [4], but that part needs to combine the 0.4 with the weight of the first sample (I think?).
Just wrap around the indices:
out = in[base] * (1 - frac) + in[(base + 1) % N] * frac
where % is the modulo operator and N is the number of input samples.
This procedure generates an interpolated line through your sample data (in the accompanying plot, not reproduced here, the dashed lines are the interpolated sample points and the circles are the input values).
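A minimal F# sketch of that wrap-around loop (my own illustration of the suggestion, assuming a constant speed step; the pointer wraps by subtracting N rather than resetting to 0, which keeps the fractional offset consistent):

let interpolateWrapped (input: float[]) (speed: float) (count: int) =
    let N = input.Length
    let output = Array.zeroCreate count
    let mutable indexPointer = 0.0
    for i in 0 .. count - 1 do
        let baseIndex = int indexPointer
        let frac = indexPointer - float baseIndex
        // wrap the second index so the last sample interpolates towards the first one
        output.[i] <- input.[baseIndex] * (1.0 - frac) + input.[(baseIndex + 1) % N] * frac
        indexPointer <- indexPointer + speed
        if indexPointer >= float N then indexPointer <- indexPointer - float N
    output

With the question's data, interpolateWrapped [|8.0; 12.0; 16.0; 20.0|] 0.72 6 yields 8.0, 10.88, 13.76, 16.64, 19.52 and then 12.8 for the last point, i.e. 0.4*in[3] + 0.6*in[0] instead of the wrong value.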
I think I understand the problem now (the answer only applies if I really did...):
You sample values at a nominal speed sn, but your sampler actually samples at a real speed s, where s != sn. Now you want to create a function which re-samples the series, sampled at speed s, so that it yields a series as if it had been sampled at speed sn, by means of linear interpolation between 2 adjacent samples. Or, your sampler jitters (has variance in the times at which it actually samples, i.e. sn + Noise(sn)).
Here is my approach - a function named "re-sample". It takes the sample data and a list of desired re-sample-points.
For any re-sample point which would index outside the raw data, it returns the respective border value.
let resample (data : float array) times =
    let N = Array.length data
    let maxIndex = N-1
    let weight (t : float) =
        t - (floor t)
    let interpolate x1 x2 w = x1 * (1.0 - w) + x2 * w
    let interp t1 t2 w =
        //printfn "t1 = %d t2 = %d w = %f" t1 t2 w
        interpolate (data.[t1]) (data.[t2]) w
    let inter t =
        let t1 = int (floor t)
        match t1 with
        | x when x >= 0 && x < maxIndex ->
            let t2 = t1 + 1
            interp t1 t2 (weight t)
        | x when x >= maxIndex -> data.[maxIndex]
        | _ -> data.[0]
    times
    |> List.map (fun t -> t, inter t)
    |> Array.ofList

let raw_data = [8; 12; 16; 20] |> List.map float |> Array.ofList
let resampled = resample raw_data [0.0..0.2..4.0]
And yields:
val resample : data:float array -> times:float list -> (float * float) []
val raw_data : float [] = [|8.0; 12.0; 16.0; 20.0|]
val resampled : (float * float) [] =
[|(0.0, 8.0); (0.2, 8.8); (0.4, 9.6); (0.6, 10.4); (0.8, 11.2); (1.0, 12.0);
(1.2, 12.8); (1.4, 13.6); (1.6, 14.4); (1.8, 15.2); (2.0, 16.0);
(2.2, 16.8); (2.4, 17.6); (2.6, 18.4); (2.8, 19.2); (3.0, 20.0);
(3.2, 20.0); (3.4, 20.0); (3.6, 20.0); (3.8, 20.0); (4.0, 20.0)|]
Now, I still fail to understand the "wrap around" part of your question. In the end, interpolation - in contrast to extrapolation - is only defined for values in [0..N-1]. So it is up to you to decide whether the function should produce a run time error or simply use the edge values (or 0) for time values out of bounds of your raw data array.
EDIT
As it turned out, it is about how to use a cyclic (ring) buffer for this as well.
Here is a version of the resample function using a cyclic buffer, along with some operations:
update adds a new sample value to the ring buffer.
read reads the content of a ring buffer element as if it were a normal array, indexed from [0..N-1].
initXXX functions create the ring buffer in various forms.
length returns the length or capacity of the ring buffer.
The ring buffer logic is factored into a module to keep it all clean.
module Cyclic =
    let wrap n x = x % n // % is modulo operator, just like in C/C++

    type Series = { A : float array; WritePosition : int }

    let init (n : int) =
        { A = Array.init n (fun i -> 0.);
          WritePosition = 0
        }

    let initFromArray a =
        let n = Array.length a
        { A = Array.copy a;
          WritePosition = 0
        }

    let initUseArray a =
        let n = Array.length a
        { A = a;
          WritePosition = 0
        }

    let update (sample : float ) (series : Series) =
        let wrapper = wrap (Array.length series.A)
        series.A.[series.WritePosition] <- sample
        { series with
            WritePosition = wrapper (series.WritePosition + 1) }

    let read i series =
        let n = Array.length series.A
        let wrapper = wrap (Array.length series.A)
        series.A.[wrapper (series.WritePosition + i)]

    let length (series : Series) = Array.length (series.A)
let resampleSeries (data : Cyclic.Series) times =
    let N = Cyclic.length data
    let maxIndex = N-1
    let weight (t : float) =
        t - (floor t)
    let interpolate x1 x2 w = x1 * (1.0 - w) + x2 * w
    let interp t1 t2 w =
        interpolate (Cyclic.read t1 data) (Cyclic.read t2 data) w
    let inter t =
        let t1 = int (floor t)
        match t1 with
        | x when x >= 0 && x < maxIndex ->
            let t2 = t1 + 1
            interp t1 t2 (weight t)
        | x when x >= maxIndex -> Cyclic.read maxIndex data
        | _ -> Cyclic.read 0 data
    times
    |> List.map (fun t -> t, inter t)
    |> Array.ofList

let input = raw_data
let rawSeries0 = Cyclic.initFromArray input
(resampleSeries rawSeries0 [0.0..0.2..4.0]) = resampled
I have recently begun learning F#, hoping to use it to implement mathematically heavy algorithms in C# applications and to broaden my knowledge.
I have so far avoided Stack Overflow, as I didn't want to see the answer to this until I came to one myself.
I want to be able to write very efficient F# code, focused first on performance and then maybe on other aspects, such as conciseness (number of lines etc.).
Project Euler Question 4:
A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
My Answer:
let IsPalindrome (x:int) = if x.ToString().ToCharArray() = Array.rev(x.ToString().ToCharArray()) then x else 0
let euler4 = [for i in [100..999] do
                for j in [i..999] do yield i*j]
             |> Seq.filter(fun x -> x = IsPalindrome(x)) |> Seq.max |> printf "Largest product of two 3-digit numbers is %d"
I tried using option and returning Some(x) and None in IsPalindrome but kept getting compile errors because I was passing in an int and returning an int option. I got a NullReferenceException trying to return None.Value.
Instead I return 0 if the number isn't a palindrome; unfortunately, these 0s go into the sequence.
Maybe I could sort the sequence and take the top value instead of using Seq.max? Or filter the results so only values > 1 remain?
Would this be better? Any advice would be much appreciated, even if it's general F# advice.
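One way around the Option problem described above (a sketch of my own, not from the original question) is to have the check return an int option and use Seq.choose, so the non-palindromes disappear instead of turning into 0s:

let TryPalindrome (x: int) =
    let s = string x
    if s = System.String(Array.rev (s.ToCharArray())) then Some x else None

let euler4 =
    [ for i in 100 .. 999 do
        for j in i .. 999 -> i * j ]
    |> Seq.choose TryPalindrome    // None values are simply dropped
    |> Seq.max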
Efficiency being a primary concern, using string allocation/manipulation to find a numeric palindrome seems misguided – here's my approach:
module NumericLiteralG =
    let inline FromZero () = LanguagePrimitives.GenericZero
    let inline FromOne () = LanguagePrimitives.GenericOne

module Euler =
    let inline isNumPalindrome number =
        let ten = 1G + 1G + 1G + 1G + 1G + 1G + 1G + 1G + 1G + 1G
        let hundred = ten * ten
        let rec findHighDiv div =
            let div' = div * ten
            if number / div' = 0G then div else findHighDiv div'
        let rec impl n div =
            div = 0G || n / div = n % ten && impl (n % div / ten) (div / hundred)
        findHighDiv 1G |> impl number

    let problem004 () =
        { 100 .. 999 }
        |> Seq.collect (fun n -> Seq.init (1000 - n) ((+) n >> (*) n))
        |> Seq.filter isNumPalindrome
        |> Seq.max
Here's one way to do it:
/// handy extension for reversing a string
type System.String with
    member s.Reverse() = String(Array.rev (s.ToCharArray()))

let isPalindrome x = let s = string x in s = s.Reverse()

seq {
    for i in 100..999 do
        for j in i..999 -> i * j
}
|> Seq.filter isPalindrome
|> Seq.max
|> printfn "The answer is: %d"
let IsPalindrom (str:string)=
    let rec fn(a,b)=a>b||str.[a]=str.[b]&&fn(a+1,b-1)
    fn(0,str.Length-1)
let IsIntPalindrome = (string>>IsPalindrom)
let sq={100..999}
sq|>Seq.map (fun x->sq|>Seq.map (fun y->(x,y),x*y))
  |>Seq.concat|>Seq.filter (snd>>IsIntPalindrome)|>Seq.maxBy (snd)
just my solution:
let isPalin (x: int) =
    x.ToString() = new string(Array.rev (x.ToString().ToCharArray()))
let isGood num seq1 = Seq.exists (fun elem -> (num % elem = 0 && (num / elem) < 999)) seq1
{998001 .. -1 .. 10000} |> Seq.filter(fun x -> isPalin x) |> Seq.filter(fun x -> isGood x {999 .. -1 .. 100}) |> Seq.nth 0
The simplest way is to go from 999 down to 100, because the answer is much more likely to be a product of two large numbers.
j can then start from i because the other way around has already been tested.
Other optimisations would go in the direction of generating the multiplications in descending order, but that makes everything a little more difficult. In general it is expressed as list merging.
Haskell (my best try in functional programming)
merge f x [] = x
merge f [] y = y
merge f (x:xs) (y:ys)
    | f x y     = x : merge f xs (y:ys)
    | otherwise = y : merge f (x:xs) ys

compare_tuples (a,b) (c,d) = a*b >= c*d

gen_mul n = (n,n) : merge compare_tuples
                          ( gen_mul (n-1) )
                          ( map (\x -> (n,x)) [n-1,n-2 .. 1] )

is_product_palindrome (a,b) = x == reverse x where x = show (a*b)

main = print $ take 10 $ map ( \(a,b)->(a,b,a*b) )
             $ filter is_product_palindrome $ gen_mul 9999
Output (less than 1s) - the first 10 palindromes:
[(9999,9901,99000099),
(9967,9867,98344389),
(9999,9811,98100189),
(9999,9721,97200279),
(9999,9631,96300369),
(9999,9541,95400459),
(9999,9451,94500549),
(9767,9647,94222249),
(9867,9547,94200249),
(9999,9361,93600639)]
One can see that this sequence is lazily generated from large to small.
Optimized version:
let Euler dgt=
    let [mine;maxe]=[dgt-1;dgt]|>List.map (fun x->String.replicate x "9"|>int)
    let IsPalindrom (str:string)=
        let rec fn(a,b)=a>b||str.[a]=str.[b]&&fn(a+1,b-1)
        fn(0,str.Length-1)
    let IsIntPalindrome = (string>>IsPalindrom)
    let rec fn=function
        |x,y,max,a,_ when a=mine->x,y,max
        |x,y,max,a,b when b=mine->fn(x,y,max,a-1,maxe)
        |x,y,max,a,b->a*b|>function
            |m when b=maxe&&m<max->x,y,max
            |m when m>max&&IsIntPalindrome(m)->fn(a,b,m,a-1,maxe)
            |m when m>max->fn(x,y,max,a,b-1)
            |_->fn(x,y,max,a-1,maxe)
    fn(0,0,0,maxe,maxe)
Log (switch #time on):
> Euler 2;;
Real: 00:00:00.004, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
val it : int * int * int = (99, 91, 9009)
> Euler 3;;
Real: 00:00:00.004, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
val it : int * int * int = (993, 913, 906609)
> Euler 4;;
Real: 00:00:00.002, CPU: 00:00:00.000, GC gen0: 0, gen1: 0, gen2: 0
val it : int * int * int = (9999, 9901, 99000099)
> Euler 5;;
Real: 00:00:00.702, CPU: 00:00:00.686, GC gen0: 108, gen1: 1, gen2: 0
val it : int * int * int = (99793, 99041, 1293663921) //int32 overflow
Extended to BigInteger:
let Euler dgt=
    let [mine;maxe]=[dgt-1;dgt]|>List.map (fun x->new System.Numerics.BigInteger(String.replicate x "9"|>int))
    let IsPalindrom (str:string)=
        let rec fn(a,b)=a>b||str.[a]=str.[b]&&fn(a+1,b-1)
        fn(0,str.Length-1)
    let IsIntPalindrome = (string>>IsPalindrom)
    let rec fn=function
        |x,y,max,a,_ when a=mine->x,y,max
        |x,y,max,a,b when b=mine->fn(x,y,max,a-1I,maxe)
        |x,y,max,a,b->a*b|>function
            |m when b=maxe&&m<max->x,y,max
            |m when m>max&&IsIntPalindrome(m)->fn(a,b,m,a-1I,maxe)
            |m when m>max->fn(x,y,max,a,b-1I)
            |_->fn(x,y,max,a-1I,maxe)
    fn(0I,0I,0I,maxe,maxe)
Check:
Euler 5;;
Real: 00:00:02.658, CPU: 00:00:02.605, GC gen0: 592, gen1: 1, gen2: 0
val it :
System.Numerics.BigInteger * System.Numerics.BigInteger *
System.Numerics.BigInteger =
(99979 {...}, 99681 {...}, 9966006699 {...})
I am currently trying to improve the performance of an F# program to make it as fast as its C# equivalent. The program applies a filter array to a buffer of pixels. Access to memory is always done using pointers.
Here is the C# code which is applied to each pixel of an image:
unsafe private static byte getPixelValue(byte* buffer, double* filter, int filterLength, double filterSum)
{
    double sum = 0.0;
    for (int i = 0; i < filterLength; ++i)
    {
        sum += (*buffer) * (*filter);
        ++buffer;
        ++filter;
    }
    sum = sum / filterSum;
    if (sum > 255) return 255;
    if (sum < 0) return 0;
    return (byte) sum;
}
The F# code looks like this and takes three times as long as the C# program:
let getPixelValue (buffer:nativeptr<byte>) (filterData:nativeptr<float>) filterLength filterSum : byte =
    let rec accumulatePixel (acc:float) (buffer:nativeptr<byte>) (filter:nativeptr<float>) i =
        if i > 0 then
            let newAcc = acc + (float (NativePtr.read buffer) * (NativePtr.read filter))
            accumulatePixel newAcc (NativePtr.add buffer 1) (NativePtr.add filter 1) (i-1)
        else
            acc
    let acc = (accumulatePixel 0.0 buffer filterData filterLength) / filterSum
    match acc with
    | _ when acc > 255.0 -> 255uy
    | _ when acc < 0.0 -> 0uy
    | _ -> byte acc
Using mutable variables and a for loop in F# results in the same speed as using recursion. All projects are configured to run in Release mode with code optimization turned on.
How could the performance of the F# version be improved?
EDIT:
The bottleneck seems to be in (NativePtr.get buffer offset). If I replace this code with a fixed value and also replace the corresponding code in the C# version with a fixed value, I get about the same speed for both programs. In fact, in C# the speed does not change at all, but in F# it makes a huge difference.
Can this behaviour possibly be changed or is it rooted deeply in the architecture of F#?
EDIT 2:
I refactored the code again to use for-loops. The execution speed remains the same:
let mutable acc = 0.0
let mutable f = filterData
let mutable b = tBuffer
for i in 1 .. filter.FilterLength do
    acc <- acc + (float (NativePtr.read b)) * (NativePtr.read f)
    f <- NativePtr.add f 1
    b <- NativePtr.add b 1
If I compare the IL code of a version that uses (NativePtr.read b) with another version that is identical except that it uses the fixed value 111uy instead of reading from the pointer, only the following lines in the IL code change:
111uy has the IL code ldc.i4.s 0x6f (0.3 seconds)
(NativePtr.read b) has the IL code lines ldloc.s b and ldobj uint8 (1.4 seconds)
For comparison: C# does the filtering in 0.4 seconds.
The fact that reading the filter does not impact performance while reading from the image buffer does is somewhat confusing. Before I filter a line of the image I copy the line into a buffer that has the length of a line. That's why the read operations are not spread all over the image but stay within this buffer, which has a size of about 800 bytes.
If we look at the actual IL code of the inner loop (which traverses both buffers in parallel) generated by the C# compiler (relevant part):
L_0017: ldarg.0
L_0018: ldc.i4.1
L_0019: conv.i
L_001a: add
L_001b: starg.s buffer
L_001d: ldarg.1
L_001e: ldc.i4.8
L_001f: conv.i
L_0020: add
and F# compiler:
L_0017: ldc.i4.1
L_0018: conv.i
L_0019: sizeof uint8
L_001f: mul
L_0020: add
L_0021: ldarg.2
L_0022: ldc.i4.1
L_0023: conv.i
L_0024: sizeof float64
L_002a: mul
L_002b: add
we'll notice that the C# code uses only an add instruction, while F# needs both mul and add. But on each step we only need to increment the pointers (by sizeof byte and sizeof float respectively), not to recompute an address as base + index * size, so the F# mul is unnecessary (it always multiplies by 1).
The cause is that C# defines the ++ operator for pointers, while F# provides only the add : nativeptr<'T> -> int -> nativeptr<'T> function:
[<NoDynamicInvocation>]
let inline add (x : nativeptr<'a>) (n:int) : nativeptr<'a> = to_nativeint x + nativeint n * (# "sizeof !0" type('a) : nativeint #) |> of_nativeint
So it's not "rooted deeply" in F#, it's just that module NativePtr lacks inc and dec functions.
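For illustration, such an increment could be written along the same lines as add (a sketch only, not part of the actual NativePtr module):

let inline inc (p : nativeptr<'T>) : nativeptr<'T> =
    // step the pointer by exactly one element; no multiplication involved
    NativePtr.toNativeInt p + nativeint sizeof<'T> |> NativePtr.ofNativeInt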
Btw, I suspect the above sample could be written in a more concise manner if the arguments were passed as arrays instead of raw pointers.
UPDATE:
So the following code gives only about a 1% speed-up (it seems to generate IL very similar to the C# version):
let getPixelValue (buffer:nativeptr<byte>) (filterData:nativeptr<float>) filterLength filterSum : byte =
    let rec accumulatePixel (acc:float) (buffer:nativeptr<byte>) (filter:nativeptr<float>) i =
        if i > 0 then
            let newAcc = acc + (float (NativePtr.read buffer) * (NativePtr.read filter))
            accumulatePixel newAcc (NativePtr.ofNativeInt <| (NativePtr.toNativeInt buffer) + (nativeint 1)) (NativePtr.ofNativeInt <| (NativePtr.toNativeInt filter) + (nativeint 8)) (i-1)
        else
            acc
    let acc = (accumulatePixel 0.0 buffer filterData filterLength) / filterSum
    match acc with
    | _ when acc > 255.0 -> 255uy
    | _ when acc < 0.0 -> 0uy
    | _ -> byte acc
Another thought: it might also depend on the number of calls to getPixelValue your test makes (F# splits this function into two methods while C# does it in one).
Could you post your testing code here?
Regarding arrays - I'd expect the code to be at least more concise (and not unsafe).
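For what it's worth, a rough sketch of that array-based shape (my own illustration, not benchmarked here):

let getPixelValueFromArrays (buffer: byte[]) (filter: float[]) (filterSum: float) : byte =
    let mutable sum = 0.0
    for i in 0 .. filter.Length - 1 do
        sum <- sum + float buffer.[i] * filter.[i]
    let v = sum / filterSum
    if v > 255.0 then 255uy
    elif v < 0.0 then 0uy
    else byte v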
UPDATE #2:
Looks like the actual bottleneck here is byte->float conversion.
C#:
L_0003: ldarg.1
L_0004: ldind.u1
L_0005: conv.r8
F#:
L_000c: ldarg.1
L_000d: ldobj uint8
L_0012: conv.r.un
L_0013: conv.r8
For some reason F# uses the following path: byte->float32->float64, while C# does only byte->float64. I'm not sure why that is, but with the following hack my F# version runs at the same speed as C# on gradbot's test sample (BTW, thanks gradbot for the test!):
let inline preadConvert (p : nativeptr<byte>) = (# "conv.r8" (# "ldobj !0" type (byte) p : byte #) : float #)
let inline pinc (x : nativeptr<'a>) : nativeptr<'a> = NativePtr.toNativeInt x + (# "sizeof !0" type('a) : nativeint #) |> NativePtr.ofNativeInt
let rec accumulatePixel_ed (acc, buffer, filter, i) =
    if i > 0 then
        accumulatePixel_ed
            (acc + (preadConvert buffer) * (NativePtr.read filter),
             (pinc buffer),
             (pinc filter),
             (i-1))
    else
        acc
Results:
adrian 6374985677.162810 1408.870900 ms
gradbot 6374985677.162810 1218.908200 ms
C# 6374985677.162810 227.832800 ms
C# Offset 6374985677.162810 224.921000 ms
mutable 6374985677.162810 1254.337300 ms
ed'ka 6374985677.162810 227.543100 ms
LAST UPDATE
It turned out that we can achieve the same speed even without any hacks:
let rec accumulatePixel_ed_last (acc, buffer, filter, i) =
    if i > 0 then
        accumulatePixel_ed_last
            (acc + (float << int16 <| NativePtr.read buffer) * (NativePtr.read filter),
             (NativePtr.add buffer 1),
             (NativePtr.add filter 1),
             (i-1))
    else
        acc
All we need to do is convert the byte into, say, int16 and then into float. This way the 'costly' conv.r.un instruction is avoided.
P.S. The relevant conversion code from "prim-types.fs":
let inline float (x: ^a) =
    (^a : (static member ToDouble : ^a -> float) (x))
    when ^a : float = (# "" x : float #)
    when ^a : float32 = (# "conv.r8" x : float #)
    // [skipped]
    when ^a : int16 = (# "conv.r8" x : float #)
    // [skipped]
    when ^a : byte = (# "conv.r.un conv.r8" x : float #)
    when ^a : decimal = (System.Convert.ToDouble((# "" x : decimal #)))
How does this compare? It makes fewer calls to NativePtr.
let getPixelValue (buffer:nativeptr<byte>) (filterData:nativeptr<float>) filterLength filterSum : byte =
    let accumulatePixel (acc:float) (buffer:nativeptr<byte>) (filter:nativeptr<float>) length =
        let rec accumulate acc offset =
            if offset < length then
                let newAcc = acc + (float (NativePtr.get buffer offset) * (NativePtr.get filter offset))
                accumulate newAcc (offset + 1)
            else
                acc
        accumulate acc 0
    let acc = (accumulatePixel 0.0 buffer filterData filterLength) / filterSum
    match acc with
    | _ when acc > 255.0 -> 255uy
    | _ when acc < 0.0 -> 0uy
    | _ -> byte acc
F# source code of NativePtr.
[<NoDynamicInvocation>]
[<CompiledName("AddPointerInlined")>]
let inline add (x : nativeptr<'T>) (n:int) : nativeptr<'T> = toNativeInt x + nativeint n * (# "sizeof !0" type('T) : nativeint #) |> ofNativeInt
[<NoDynamicInvocation>]
[<CompiledName("GetPointerInlined")>]
let inline get (p : nativeptr<'T>) n = (# "ldobj !0" type ('T) (add p n) : 'T #)
My results on a larger test.
adrian 6374730426.098020 1561.102500 ms
gradbot 6374730426.098020 1842.768000 ms
C# 6374730426.098020 150.793500 ms
C# Offset 6374730426.098020 150.318900 ms
mutable 6374730426.098020 1446.616700 ms
F# test code
open Microsoft.FSharp.NativeInterop
open System.Runtime.InteropServices
open System.Diagnostics
open AccumulatePixel

#nowarn "9"

let test size fn =
    let bufferByte = Marshal.AllocHGlobal(size * 4)
    let bufferFloat = Marshal.AllocHGlobal(size * 8)
    let bi = NativePtr.ofNativeInt bufferByte
    let bf = NativePtr.ofNativeInt bufferFloat
    let random = System.Random()

    for i in 1 .. size do
        NativePtr.set bi i (byte <| random.Next() % 256)
        NativePtr.set bf i (random.NextDouble())

    let duration (f, name) =
        let stopWatch = Stopwatch.StartNew()
        let time = f(0.0, bi, bf, size)
        stopWatch.Stop()
        printfn "%10s %f %f ms" name time stopWatch.Elapsed.TotalMilliseconds

    List.iter duration fn
    Marshal.FreeHGlobal bufferFloat
    Marshal.FreeHGlobal bufferByte

let rec accumulatePixel_adrian (acc, buffer, filter, i) =
    if i > 0 then
        let newAcc = acc + (float (NativePtr.read buffer) * (NativePtr.read filter))
        accumulatePixel_adrian (newAcc, (NativePtr.add buffer 1), (NativePtr.add filter 1), (i - 1))
    else
        acc

let accumulatePixel_gradbot (acc, buffer, filter, length) =
    let rec accumulate acc offset =
        if offset < length then
            let newAcc = acc + (float (NativePtr.get buffer offset) * (NativePtr.get filter offset))
            accumulate newAcc (offset + 1)
        else
            acc
    accumulate acc 0

let accumulatePixel_mutable (acc, buffer, filter, length) =
    let mutable acc = 0.0
    let mutable f = filter
    let mutable b = buffer
    for i in 1 .. length do
        acc <- acc + (float (NativePtr.read b)) * (NativePtr.read f)
        f <- NativePtr.add f 1
        b <- NativePtr.add b 1
    acc

[
    accumulatePixel_adrian, "adrian";
    accumulatePixel_gradbot, "gradbot";
    AccumulatePixel.getPixelValue, "C#";
    AccumulatePixel.getPixelValueOffset, "C# Offset";
    accumulatePixel_mutable, "mutable";
]
|> test 100000000

System.Console.ReadLine() |> ignore
C# test code
namespace AccumulatePixel
{
    public class AccumulatePixel
    {
        unsafe public static double getPixelValue(double sum, byte* buffer, double* filter, int filterLength)
        {
            for (int i = 0; i < filterLength; ++i)
            {
                sum += (*buffer) * (*filter);
                ++buffer;
                ++filter;
            }
            return sum;
        }

        unsafe public static double getPixelValueOffset(double sum, byte* buffer, double* filter, int filterLength)
        {
            for (int i = 0; i < filterLength; ++i)
            {
                sum += buffer[i] * filter[i];
            }
            return sum;
        }
    }
}
I'm trying to learn F# by translating some Haskell code I wrote a very long time ago, but I'm stuck!
percent :: Int -> Int -> Float
percent a b = (fromInt a / fromInt b) * 100
freqs :: String -> [Float]
freqs ws = [percent (count x ws) (lowers ws) | x <- ['a' .. 'z']]
I've managed this:
let percent a b = (float a / float b) * 100.
although I don't like having to put the . after the 100.
What is the name of the operation I am performing in freqs, and how do I translate it to F#?
Edit: count and lowers are Char -> String -> Int and String -> Int respectively, and I have translated these already.
This is a list comprehension, and in F# it looks like the last two lines below:
// stub out since don't know the implementation
let count (c:char) (s:string) = 4
let lowers (s:string) = 10
// your code
let percent a b = (float a / float b) * 100.
let freq ws = [for x in ['a'..'z'] do
                   yield percent (count x ws) (lowers ws)]
More generally I think Haskell list comprehensions have the form suggested by the example below, and the corresponding F# is shown.
// Haskell
// [e(x,y) | x <- l1, y <- l2, pred(x,y)]
// F#
[for x in l1 do
    for y in l2 do
        if pred(x,y) then
            yield e(x,y)]
Note that Brian's F# code:
let freq ws = [for x in ['a'..'z'] do yield percent (count x ws) (lowers ws)]
Can be written more elegantly as:
let freq ws = [for x in 'a'..'z' -> percent (count x ws) (lowers ws)]