Seq.map faster than a regular for loop? - performance

I'm learning F#, and one thing that concerns me about the language is performance. I've written a small benchmark where I compare idiomatic F# to imperative-style code written in the same language - and much to my surprise, the functional version comes out significantly faster.
The benchmark consists of:
Reading in a text file using File.ReadAllLines
Reversing the order of characters within each line
Writing back the result to the same file using File.WriteAllLines.
Here's the code:
open System
open System.IO
open System.Diagnostics
let reverseString(str:string) =
    new string(Array.rev(str.ToCharArray()))

let CSharpStyle() =
    let lines = File.ReadAllLines("text.txt")
    for i in 0 .. lines.Length - 1 do
        lines.[i] <- reverseString(lines.[i])
    File.WriteAllLines("text.txt", lines)

let FSharpStyle() =
    File.ReadAllLines("text.txt")
    |> Seq.map reverseString
    |> (fun lines -> File.WriteAllLines("text.txt", lines))

let benchmark func message =
    // initial call for warm-up
    func()
    let sw = Stopwatch.StartNew()
    for i in 0 .. 19 do
        func()
    printfn message sw.ElapsedMilliseconds

[<EntryPoint>]
let main args =
    benchmark CSharpStyle "C# time: %d ms"
    benchmark FSharpStyle "F# time: %d ms"
    0
Whatever the size of the file, the "F#-style" version completes in around 75% of the time of the "C#-style" version. My question is, why is that? I see no obvious inefficiency in the imperative version.

Seq.map is different from Array.map. Because sequences (IEnumerable<T>) are not evaluated until they are enumerated, in the F#-style code no computation actually happens until File.WriteAllLines loops through the sequence (not array) generated by Seq.map.
In other words, your C#-style version is reversing all the strings and storing the reversed strings in an array, and then looping through the array to write out to the file. The F#-style version is reversing all the strings and writing them more-or-less directly to the file. That means the C#-style code is looping through the entire file three times (read to array, build reversed array, write array to file), while the F#-style code is looping through the entire file only twice (read to array, write reversed lines to file).
You'd get the best performance of all if you used File.ReadLines instead of File.ReadAllLines combined with Seq.map - but your output file would have to be different from your input file, as you'd be writing to output while still reading from input.
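A minimal sketch of that fully streaming variant (the function name StreamingStyle and the output path "text.out.txt" are placeholders, not part of the original benchmark):

let StreamingStyle() =
    // File.ReadLines yields lines lazily, Seq.map defers the reversal, and
    // File.WriteAllLines pulls one line at a time as it writes. The output
    // path must differ from the input, which is still open for reading.
    File.ReadLines("text.txt")
    |> Seq.map reverseString
    |> (fun lines -> File.WriteAllLines("text.out.txt", lines))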

The map form also has several advantages over a regular loop: the function reference is resolved just once, the per-element index bookkeeping and assignments disappear, and in the Array.map case the input length can be used to presize the result array (Seq.map, being lazy, never materializes a result array at all).

Related

Fastest way to index large sorted hash file

I am building a file-based index for the sorted haveibeenpwned passwords text file, and it got me wondering: what's the fastest way to do this?
I figured a good way to build a quickly grep-able index would be to split the sorted file into 256 files named after the first two hex digits (i.e. FF.txt, FE.txt, etc.). I found ripgrep rg to be about 5 times faster than grep on my computer. So I tried something like this:
for i in {255..0}
do
    start=$(date +%s)
    hex="$(printf '%02x' $i | tr '[:lower:]' '[:upper:]')"
    rg "^$hex" pwned-passwords-ntlm-ordered-by-hash-v4.txt > ntlm/$hex-ntlm.txt
    echo 0x$hex completed in $(($(date +%s) - $start)) seconds
done
This is the fastest solution I could come up with. ripgrep is able to create each file in 25 seconds. So I'm looking at about 100 minutes to create this index. When I split the job in half, and run them in parallel, each pair of files is created in 80 seconds. So it seems best to just let ripgrep work its magic and work in series.
Obviously, I won't be indexing this list too often, but it's just fun to think about. Any thoughts on a faster way (aside from using a database) to index this file?
ripgrep, like any other tool that's able to work with unsorted input files at all, is the wrong tool for this job. When you're trying to grep sorted inputs, you want something that can bisect your input file to find a position in logarithmic time. For big enough inputs, even a slow O(log n) implementation will be faster than a highly optimized O(n) one.
pts-line-bisect is one such tool, though of course you're also welcome to write your own. You'll need to write it in a language with full access to the seek() syscall, which is not exposed in bash.
You are reading through the file 256 times, doing a full scan every time. Consider an approach that reads the file once, writing each line to an open file descriptor; Python would be an easy choice of implementation (if that's your thing). You could optimize by keeping the output file open until you hit a new hex code at the beginning of a line. If you want to be even more clever, there is no need to go through the sorted file line by line: based on Charles Duffy's hint, you could create a heuristic that samples the file (using seek()) to find the next hex value, and once the byte offset of the next hex value is known, the whole block of bytes can be written to the new file. However, since this is tagged 'bash', let's keep the solution in that domain:
while read -r line
do
    hex=${line:0:2}
    echo "$line" >> ntlm/$hex-ntlm.txt
done < pwned-passwords-ntlm-ordered-by-hash-v4.txt
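For comparison, here is a minimal Python sketch of the single-pass approach described above: one sequential read, switching output files only when the two-hex-digit prefix changes. The file names mirror the question's layout and are otherwise assumptions.

current_prefix = None
out = None
with open("pwned-passwords-ntlm-ordered-by-hash-v4.txt") as src:
    for line in src:
        prefix = line[:2]
        if prefix != current_prefix:
            # The input is sorted, so each prefix forms one contiguous block;
            # close the previous output file and open the next one.
            if out:
                out.close()
            out = open("ntlm/%s-ntlm.txt" % prefix, "w")
            current_prefix = prefix
        out.write(line)
if out:
    out.close()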
I wrote a Python 3 script that does fast binary-search lookups in the hash file without having to create an index. It doesn't directly address your question (indexing), but it probably solves the underlying problem you wanted to solve with an index - quickly looking up individual hashes. This script checks hundreds of passwords in seconds.
import argparse
import hashlib

parser = argparse.ArgumentParser(description='Searches passwords in https://haveibeenpwned.com/Passwords database.')
parser.add_argument('passwords', metavar='TEST', type=str, help='text file with passwords to test, one per line, utf-8')
parser.add_argument('database', metavar='DATABASE', type=str, help='the downloaded text file with sha-1:count')
args = parser.parse_args()

def search(f: object, pattern: str) -> str:
    # Binary search by byte offset within the sorted database file.
    def search(left, right: int) -> str:
        if left >= right:
            return None
        middle = (left + right) // 2
        if middle == 0:
            f.seek(0, 0)
            test = f.readline()
        else:
            # Seek just before the midpoint, discard the (possibly partial)
            # line, and read the next complete line.
            f.seek(middle - 1, 0)
            _ = f.readline()
            test = f.readline()
        if test.upper().startswith(pattern):
            return test
        elif left == middle:
            return None
        elif pattern < test:
            return search(left, middle)
        else:
            return search(middle, right)
    f.seek(0, 2)  # seek to the end of the file to get its size
    return search(0, f.tell())

fsource = open(args.passwords)
fdatabase = open(args.database)
source_lines = fsource.readlines()
for l in source_lines:
    line = l.strip()
    hash_object = hashlib.sha1(line.encode("utf-8"))
    pattern = hash_object.hexdigest().upper()
    print("%s:%s" % (line, str(search(fdatabase, pattern)).strip()))
fsource.close()
fdatabase.close()

Haskell: Data.Text vs. Data.Text.Lazy Performance

As an exercise, I wrote a short Haskell program as a replacement for a Perl script.
The program reads a log file which contains multi-line messages and simply joins them to produce one line per message.
My test input file has 14000 lines and a size of 1 MB.
The version using Data.Text has a runtime of 3.5 secs, the one using Data.Text.Lazy only 0.1 secs (the original Perl script needs 0.2 secs).
I found in other posts that using Data.Text.Lazy would only make sense for really large amounts of data, so I didn't expect such a difference.
Can anybody explain the reason why, or what's wrong with my program?
The relevant part of the source (the only difference between both versions is the import Data.Text*):
{-# LANGUAGE OverloadedStrings #-}
module Main where

import Data.Char (isDigit)
import Data.Monoid ((<>))
import qualified Data.Text.Lazy as T
import qualified Data.Text.Lazy.IO as T
import System.Environment (getArgs, getProgName)
import System.IO (openFile, stdin, stdout, Handle,
                  IOMode(ReadMode,WriteMode))

main :: IO ()
main = do
    (inp,out) <- getInput
    actLog <- T.hGetContents inp
    let newLog = processLog actLog
    T.hPutStr out newLog

processLog :: T.Text -> T.Text
processLog = foldr joinLines "" . T.lines

joinLines :: T.Text -> T.Text -> T.Text
joinLines elem accu
    | T.length elem == 0    = accu                      -- Blank Line
    | T.head elem == ' '    = textElem <> accu          -- Continuation
    | isDigit (T.head elem) = "\n" <> textElem <> accu  -- Start
    | otherwise             = accu                      -- Garbage
    where textElem = T.strip elem
This looks like a data structures issue rather than a laziness issue. A strict Text is essentially a single big chunk of memory, while a lazy Text is essentially a linked list of strict Texts ("chunks"). The way that a lazy Text is split up into chunks isn't supposed to be part of the meaning of the value (which is just the text obtained by concatenating all the chunks). But it can have a big effect on the cost of operations.
You have a bunch of operations of the form short <> accu where accu is growing with the size of your output. If these are strict Texts, this concatenation has to copy the entire contents of both short and accu into a new strict Text value. The total runtime is necessarily quadratic. If they are lazy Texts, then <> has another option: it can prepend the list of chunks that is short to the list of chunks that is accu. This doesn't need to touch accu at all, but even if it did, the spine of the linked list of chunks that makes up accu could be much less data to process than the entire textual contents of accu, depending on the chunk size. That's why your lazy Text version is so much faster.
It looks like you could write your program in the form
processLog = T.unlines . processLines . T.lines
processLines :: [T.Text] -> [T.Text]
processLines = ...
which leaves the problem of how to concatenate up to the library function T.unlines, in which case you can expect it to be efficient whether you use strict Text or lazy Text.
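To make that concrete, here is one possible processLines, mirroring the guards of joinLines above. This is only a sketch of the suggested restructuring, not tested against the original log format:

processLines :: [T.Text] -> [T.Text]
processLines = go
  where
    -- Start a new message at each line beginning with a digit, append the
    -- stripped continuation lines (those starting with a space), and drop
    -- blank lines and garbage, as joinLines does.
    go [] = []
    go (l:ls)
      | not (T.null l) && isDigit (T.head l) =
          let (conts, rest) = span isCont ls
          in T.concat (T.strip l : map T.strip conts) : go rest
      | otherwise = go ls
    isCont l = not (T.null l) && T.head l == ' '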
The difference between the lazy and normal Data.Text is whether the entire file is read into memory at
actLog <- T.hGetContents inp
When processed lazily, Haskell reads only as much of the file as is needed to produce the next piece of output. So instead of reading the entire file and then processing and writing it, the program can read, process and write incrementally, eliminating the wait for the entire file to be read into memory.

What makes this F# so slow in VS2012?

In Visual Studio 2012, why would this code execute so quickly in the interactive mode and so slowly when run as a console application? I have a fast computer, but I can count to 4 before the function completes at run-time and not even to 1 in the interactive studio window.
The other part that irks me about this is that when I test other people's F# code for Project Euler #4, they all run fine. So it leads me to believe that there is something about this code that is not optimal. (and it was so neat and clean too >:P)
let reverse(digits:string) =
    digits.ToCharArray() |> Array.rev |> System.String.Concat

let isPalindrome(number:int) =
    let text = number.ToString()
    if text.Length % 2 = 0 then
        text = reverse(text)
    else
        false

let palindromes(floor:int, ceiling:int) =
    seq {
        for i1 = floor to ceiling do
            for i2 = floor to ceiling do
                let result = i1 * i2
                if isPalindrome result then
                    yield result
    }

let run =
    palindromes(100, 999)
    |> Seq.max
SUMMARY
For the sake of posterity, I'll list the most effective performance changes.
Removing Concat and creating a new string instead.
Retaining only the largest palindrome instead of collecting all of them
Replacing the string version of reverse with a computational reverse (see the sketch after this summary)
It still doesn't explain my original issue, but it's so negligible now that it's hard to convince myself to spend any more time on it. I appreciate everyone's input. Thanks!
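For reference, one way to do the "computational reverse" mentioned in the last item - reversing the digits arithmetically and never building a string - is sketched below; the name isPalindromeNumeric and the exact formulation are just an illustration, not the code used in the timings above:

let isPalindromeNumeric (number: int) =
    // Reverse the decimal digits arithmetically and compare with the original.
    let rec reverseDigits acc n =
        if n = 0 then acc else reverseDigits (acc * 10 + n % 10) (n / 10)
    number = reverseDigits 0 number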
If you compile the project in Release mode (the "Optimize code" option turned on), you can hardly see a difference between running the program and executing it in F# Interactive.
If you have read other people's versions, you can see that checking isPalindrome can be done directly on numbers. However, a quick fix still using String:
let reverse(digits:string) =
    System.String (digits.ToCharArray() |> Array.rev)
Note that string concatenation is slow compared to a String constructor call.
As #wsanville said, odd-length numbers can be palindromes too:
let isPalindrome(number:int) =
    let text = number.ToString()
    text = reverse(text)
In the palindromes function, execution time can be cut in half by starting i2 from i1:
let palindromes(floor:int, ceiling:int) =
    seq {
        for i1 = floor to ceiling do
            for i2 = i1 to ceiling do
                let result = i1 * i2
                if isPalindrome result then
                    yield result
    }

let run =
    palindromes(100, 999)
    |> Seq.max
With these simple optimizations, your code runs 5x faster on my machine.
This question is quite old, but I had the same problem before.
Console apps in F# (or any F# app) load FSharp.Core.dll during execution, so to get a faster-starting exe file, do the following:
1.) Build in Release mode (say, Hello.exe).
2.) Merge FSharp.Core.dll into your exe file using ILMerge (http://www.microsoft.com/en-us/download/details.aspx?id=17630):
Open a command prompt, cd to the Release folder, and run
ILMERGE Hello.exe Fsharp.Core.dll /out:Hello2.exe
Run Hello2.exe and check whether it loads faster.

Debugging HXT performance problems

I'm trying to use HXT to read in some big XML data files (hundreds of MB.)
My code has a space-leak somewhere, but I can't seem to find it. I do have a little bit of a clue as to what is happening thanks to my very limited knowledge of the ghc profiling tool chain.
Basically, the document is parsed, but not evaluated.
Here's some code:
{-# LANGUAGE Arrows, NoMonomorphismRestriction #-}
import Text.XML.HXT.Core

import System.Environment (getArgs)
import Control.Monad (liftM)

main = do
    file <- (liftM head getArgs) >>= parseTuba
    case file of
        (Left m)  -> print "Failed."
        (Right _) -> print "Success."

data Sentence t = Sentence [Node t] deriving Show
data Node t = Word { wSurface :: !t } deriving Show

parseTuba :: FilePath -> IO (Either String ([Sentence String]))
parseTuba f = do
    r <- runX (readDocument [] f >>> process)
    case r of
        []   -> return $ Left "No parse result."
        [pr] -> return $ Right pr
        _    -> return $ Left "Ambiguous parse result!"

process :: (ArrowXml a) => a XmlTree ([Sentence String])
process = getChildren >>> listA (tag "sentence" >>> listA word >>> arr (\ns -> Sentence ns))

word :: (ArrowXml a) => a XmlTree (Node String)
word = tag "word" >>> getAttrValue "form" >>> arr (\s -> Word s)

-- | Gets the tag with the given name below the node.
tag :: (ArrowXml a) => String -> a XmlTree XmlTree
tag s = getChildren >>> isElem >>> hasName s
I'm trying to read a corpus file, and the structure is obviously something like <corpus><sentence><word form="Hello"/><word form="world"/></sentence></corpus>.
Even on the very small development corpus, the program takes ~15 secs to read it in, of which around 20% are GC time (that's way too much.)
In particular, a lot of data is spending way too much time in the DRAG state. This is the profile (a heap profile monitoring DRAG culprits): you can see that decodeDocument gets called a lot, and its data is then stalled until the very end of the execution.
Now, I think this should be easily fixed by folding all this decodeDocument stuff into my data structures (Sentence and Word), so the RTS can forget about those thunks. The way it currently happens, though, the folding takes place at the very end, when I force evaluation by deconstructing the Either in the IO monad, whereas it could easily happen online. I see no reason for this, and my attempts to strictify the program have so far been in vain. I hope somebody can help me :-)
I can't even figure out where to put the seqs and $!s…
One possible thing to try: the default hxt parser is strict, but there does exist a lazy parser based on tagsoup: http://hackage.haskell.org/package/hxt-tagsoup
I understand that expat can do lazy processing as well: http://hackage.haskell.org/package/hxt-expat
You may want to see if switching parsing backends, by itself, solves your issue.
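If you go the hxt-tagsoup route, the change is small. A sketch, assuming the hxt-tagsoup package (which exposes a withTagSoup system config) and reusing the process arrow from the question; parseTubaLazy is just an illustrative name:

import Text.XML.HXT.Core
import Text.XML.HXT.TagSoup (withTagSoup)

-- Same as parseTuba, but with the lazy tagsoup parser selected as the backend.
parseTubaLazy :: FilePath -> IO (Either String [Sentence String])
parseTubaLazy f = do
    r <- runX (readDocument [withTagSoup] f >>> process)
    case r of
        []   -> return $ Left "No parse result."
        [pr] -> return $ Right pr
        _    -> return $ Left "Ambiguous parse result!"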

Scala: Mutable vs. Immutable Object Performance - OutOfMemoryError

I wanted to compare the performance characteristics of immutable.Map and mutable.Map in Scala for a similar operation (namely, merging many maps into a single one. See this question). I have what appear to be similar implementations for both mutable and immutable maps (see below).
As a test, I generated a List containing 1,000,000 single-item Map[Int, Int] and passed this list into the functions I was testing. With sufficient memory, the results were unsurprising: ~1200ms for mutable.Map, ~1800ms for immutable.Map, and ~750ms for an imperative implementation using mutable.Map -- not sure what accounts for the huge difference there, but feel free to comment on that, too.
What did surprise me a bit, perhaps because I'm being a bit thick, is that with the default run configuration in IntelliJ 8.1, both mutable implementations hit an OutOfMemoryError, but the immutable collection did not. The immutable test did run to completion, but it did so very slowly -- it takes about 28 seconds. When I increased the max JVM memory (to about 200MB, not sure where the threshold is), I got the results above.
Anyway, here's what I really want to know:
Why do the mutable implementations run out of memory, but the immutable implementation does not? I suspect that the immutable version allows the garbage collector to run and free up memory before the mutable implementations do -- and all of those garbage collections explain the slowness of the immutable low-memory run -- but I'd like a more detailed explanation than that.
Implementations below. (Note: I don't claim that these are the best implementations possible. Feel free to suggest improvements.)
def mergeMaps[A,B](func: (B,B) => B)(listOfMaps: List[Map[A,B]]): Map[A,B] =
  (Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv)) { (acc, kv) =>
    acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
  }

def mergeMutableMaps[A,B](func: (B,B) => B)(listOfMaps: List[mutable.Map[A,B]]): mutable.Map[A,B] =
  (mutable.Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv)) { (acc, kv) =>
    acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
  }

def mergeMutableImperative[A,B](func: (B,B) => B)(listOfMaps: List[mutable.Map[A,B]]): mutable.Map[A,B] = {
  val toReturn = mutable.Map[A,B]()
  for (m <- listOfMaps; kv <- m) {
    if (toReturn contains kv._1) {
      toReturn(kv._1) = func(toReturn(kv._1), kv._2)
    } else {
      toReturn(kv._1) = kv._2
    }
  }
  toReturn
}
Well, it really depends on the actual type of Map you are using - probably HashMap. Mutable structures like that gain performance by pre-allocating memory they expect to use. You are joining one million maps, so the final map is bound to be somewhat big. Let's see how these key/values get added:
protected def addEntry(e: Entry) {
  val h = index(elemHashCode(e.key))
  e.next = table(h).asInstanceOf[Entry]
  table(h) = e
  tableSize = tableSize + 1
  if (tableSize > threshold)
    resize(2 * table.length)
}
See the 2 * in the resize line? The mutable HashMap grows by doubling each time it runs out of space, while the immutable one is pretty conservative in memory usage (though existing keys will usually occupy twice the space when updated).
Now, as for other performance problems, you are creating a list of keys and values in the first two versions. That means that, before you join any maps, you already have each Tuple2 (the key/value pairs) in memory twice! Plus the overhead of List, which is small, but we are talking about more than one million elements times the overhead.
You may want to use a projection, which avoids that. Unfortunately, projection is based on Stream, which isn't very reliable for our purposes on Scala 2.7.x. Still, try this instead:
for (m <- listOfMaps.projection; kv <- m) yield kv
A Stream doesn't compute a value until it is needed. The garbage collector ought to collect the unused elements as well, as long as you don't keep a reference to the Stream's head, which seems to be the case in your algorithm.
EDIT
To complement that: a for/yield comprehension takes one or more collections and returns a new collection. Whenever it makes sense, the resulting collection is of the same type as the original collection. So, for example, in the following code the for-comprehension creates a new list, which is then stored in l2. It is not val l2 = that creates the new list, but the for-comprehension.
val l = List(1,2,3)
val l2 = for (e <- l) yield e*2
Now, let's look at the code being used in the first two algorithms (minus the mutable keyword):
(Map[A,B]() /: (for (m <- listOfMaps; kv <-m) yield kv))
The foldLeft operator, here written with its /: synonym, will be invoked on the object returned by the for-comprehension. Remember that a : at the end of an operator inverts the order of the object and the parameters.
Now, let's consider what object is this, on which foldLeft is being called. The first generator in this for-comprehension is m <- listOfMaps. We know that listOfMaps is a collection of type List[X], where X isn't really relevant here. The result of a for-comprehension on a List is always another List. The other generators aren't relevant.
So, you take this List, get all the key/values inside each Map which is a component of this List, and make a new List with all of that. That's why you are duplicating everything you have.
(in fact, it's even worse than that, because each generator creates a new collection; the collections created by the second generator are just the size of each element of listOfMaps though, and are immediately discarded after use)
The next question -- actually, the first one, but it was easier to invert the answer -- is how the use of projection helps.
When you call projection on a List, it returns a new object, of type Stream (on Scala 2.7.x). At first you may think this will only make things worse, because you'll now have three copies of the List instead of a single one. But a Stream is not pre-computed. It is lazily computed.
What that means is that the resulting object, the Stream, isn't a copy of the List, but, rather, a function that can be used to compute the Stream when required. Once computed, the result will be kept so that it doesn't need to be computed again.
Also, map, flatMap and filter of a Stream all return a new Stream, which means you can chain them all together without making a single copy of the List which created them. Since for-comprehensions with yield use these very functions, the use of Stream inside the for-comprehension prevents unnecessary copies of the data.
Now, suppose you wrote something like this:
val kvs = for (m <- listOfMaps.projection; kv <-m) yield kv
(Map[A,B]() /: kvs) { ... }
In this case you aren't gaining anything. After assigning the Stream to kvs, the data hasn't been copied yet. Once the second line is executed, though, kvs will have computed each of its elements, and, therefore, will hold a complete copy of the data.
Now consider the original form:
(Map[A,B]() /: (for (m <- listOfMaps.projection; kv <-m) yield kv))
In this case, the Stream is used at the same time it is computed. Let's briefly look at how foldLeft for a Stream is defined:
override final def foldLeft[B](z: B)(f: (B, A) => B): B = {
  if (isEmpty) z
  else tail.foldLeft(f(z, head))(f)
}
If the Stream is empty, just return the accumulator. Otherwise, compute a new accumulator (f(z, head)) and then pass it and the function to the tail of the Stream.
Once f(z, head) has executed, though, there will be no remaining reference to the head. Or, in other words, nothing anywhere in the program will be pointing to the head of the Stream, and that means the garbage collector can collect it, thus freeing memory.
The end result is that each element produced by the for-comprehension will exist just briefly, while you use it to compute the accumulator. And this is how you save keeping a copy of your whole data.
Finally, there is the question of why the third algorithm does not benefit from it. Well, the third algorithm does not use yield, so no copy of any data whatsoever is being made. In this case, using projection only adds an indirection layer.
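Putting it together, the first algorithm with the suggested change looks roughly like this (a sketch for Scala 2.7.x, where projection on a List yields a lazily evaluated Stream):

def mergeMaps[A,B](func: (B,B) => B)(listOfMaps: List[Map[A,B]]): Map[A,B] =
  // The key/value pairs are now produced lazily by the projection instead of
  // being materialized in an intermediate List, so foldLeft consumes them one
  // at a time and the garbage collector can reclaim each pair as it goes.
  (Map[A,B]() /: (for (m <- listOfMaps.projection; kv <- m) yield kv)) { (acc, kv) =>
    acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
  }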
